Posted to user@geode.apache.org by Jagdish Mirani <jm...@pivotal.io> on 2019/03/04 17:11:22 UTC

Submit your Geode Summit 2019 session proposals, register to attend

Hello Apache Geode community:
Join us October 7-10, 2019, for our fourth Geode Summit. As in prior years,
the Geode Summit is being held in conjunction with the SpringOne Platform
conference <https://springoneplatform.io/>, this time in Austin, Texas.

This is a great opportunity to share your Geode knowledge, success stories,
and best practices. It's also a great opportunity to learn from others. In
prior years we've seen a tremendous amount of useful knowledge shared by
the community (videos: 2018
<https://www.youtube.com/playlist?list=PL62pIycqXx-ShuJ1YpV2wlmNZodnqzY8T>,
2017
<https://www.youtube.com/playlist?list=PL62pIycqXx-QfmNrUmfKoTZXKU5K90JpK>,
2016
<https://www.youtube.com/playlist?list=PL62pIycqXx-RCEP2APj4Mq9Oeyvskj_0K>
).

*The call for papers <https://springoneplatform.io/2019/cfp> is now open -
just indicate Geode as the topic when you make your submission.*

*Interested in Attending?*
Even if you're not presenting, it would be great if you could still attend.

As before, there will be a special contiguous half-day block of Geode
sessions on the Monday of the conference (Oct 7th, from 1-6 PM), followed
by a number of Geode sessions Tuesday through Thursday of the same week.
There are two ways to attend:

   - A full conference registration entitles you to attend any of the Geode
   sessions, including the Monday, Oct 7th half-day Geode block. Full
   conference pass prices go up over time, so it's important to register
   early. In addition to the early-bird discount, you can use the following
   discount code for an additional $200 off the full conference pass:
   *S1P200_JMirani*.

   - We've added a reduced-price, Monday-only option for those who only want
   to attend the Monday Geode sessions.

Here's the registration link <https://springoneplatform.io/register>.

We hope to see you in Austin this fall!

Regards
Jag

Re: Submit your Geode Summit 2019 session proposals, register to attend

Posted by Anthony Baker <ab...@apache.org>.
Sounds like a great event and I encourage you to a) submit a talk and b)
attend.

I also want to note that ApacheCon will be in Las Vegas this year [1] and
it would be great to see some Geode talks at that event as well.

Anthony

[1] https://www.apachecon.com/acna19/index.html




Re: Re: Cannot load JDBC driver class

Posted by Rick Fincher <rn...@tbird.com>.
----- Original Message -----
From: "Shyly Amarasinghe" <am...@dpw.com>
To: "Tomcat Users List" <to...@jakarta.apache.org>
Sent: Friday, April 18, 2003 12:46 PM
Subject: Re: Re: Cannot load JDBC driver class


> You're beautiful!
>
> I already had the DBCP jar files in place, but following your suggestions
> I used the XML for Oracle rather than MySQL and moved everything out of
> the server.xml's global JNDI into the context, and it worked!  (Got
> Connection org.apache.commons.dbcp.PoolableConnection@4rfg345.)  I guess
> this means I can make pooled connections within a webapp, which is much
> more than I could do before.
>
> Thank you so much for all of your help - I could not have done this
> without you.
> Shyly
>
> At 10:07 PM 4/17/2003 -0400, you wrote:
>
> >> Um, is TOMCAT supposed to be an environment variable?
> >
> >No, not for Tomcat anyway.
> >
> >It sounds like you have the usual problem stuff right, so it's got to be
> >in the setup on the DataSource end of it.
> >
> >I see a couple of things here.  First off, you are not using any
> >connection pooling; does the Sybase JDBC driver pool internally?
> >
> >If not, I'd suggest using the setup in the JNDI DataSource HowTo with
> >DBCP.  It just involves dropping the jar files listed in the HowTo into
> >common/lib.  Your performance will be much better.
> >
> >I wouldn't use Administrator yet to do this type of setup; it is still
> >not 100%.  Set up your server.xml file manually.
> >
> >Also, is the Sybase driver type 4?  You may need a type 4 driver to
> >connect directly as a DataSource, but DBCP will use earlier drivers and
> >provide the necessary interfaces in its DataSource factory.
> >
> >Also, the examples for MySQL with DBCP do not use the Global JNDI area,
> >and they worked for me the first time even though I use a different
> >database.  I just changed the driver name and URL.
> >
> >If you prefer to use the Sybase driver unpooled, try moving all your
> >stuff into the context and out of the Global JNDI area in server.xml.
> >
> >Also try changing your Java code (I don't remember which form you used)
> >from this:
> >
> >// Obtain our environment naming context
> >Context initCtx = new InitialContext();
> >Context envCtx = (Context) initCtx.lookup("java:comp/env");
> >
> >// Look up our data source
> >DataSource ds = (DataSource)
> >  envCtx.lookup("jdbc/EmployeeDB");
> >
> >// Allocate and use a connection from the pool
> >Connection conn = ds.getConnection();
> >.... use this connection to access the database ...
> >conn.close();
> >To this:
> >
> >
> >try {
> >      Context ctx = new InitialContext();
> >      if (ctx == null)
> >          throw new Exception("Boom - No Context");
> >
> >      DataSource ds =
> >            (DataSource) ctx.lookup("java:comp/env/jdbc/TestDB");
> >
> >      if (ds != null) {
> >        Connection conn = ds.getConnection();
> >        // ... use the connection, then close it ...
> >        conn.close();
> >      }
> >} catch (Exception e) {
> >      e.printStackTrace();
> >}
> >
> >Hope this helps!
> >
> >Rick
> >
> >>
> >> Everything else (servlet, taglib) seems to work fine.  FYI, I set up
> >> the datasource using the Tomcat administrator under Resources ->
> >> Datasources.  (I also tried editing server.xml manually to create it,
> >> but got an error there as well.)  Also worth noting: when I go through
> >> the administrator to Tomcat server -> Server -> Host -> Context (/para)
> >> -> Resources -> Datasources, I get an error message
> >> "org.apache.jasper.JasperException: Exception retrieving attribute
> >> 'driverClassName'", which is confusing since that attribute is defined
> >> in server.xml.  This error message isn't in any of the other webapps.
> >> If I take the <resource-ref> code out of web.xml, that error message
> >> goes away too.
> >>
> >> You're being very patient and helpful - thank you very much!
> >>
> >> Here is webapps\para\web-inf\web.xml
> >> <?xml version="1.0" encoding="ISO-8859-1"?>
> >> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web
Application
> >2.3//EN"
> >>     "http://java.sun.com/dtd/web-app_2_3.dtd">
> >> <web-app>
> >>      <display-name>paralegal</display-name>
> >>      <description>Paralegal status</description>
> >>      <servlet>
> >>           <servlet-name>
> >>                LawSchools
> >>           </servlet-name>
> >>           <servlet-class>
> >>                LawSchools
> >>           </servlet-class>
> >>      </servlet>
> >>     <servlet-mapping>
> >>         <servlet-name>
> >>             LawSchools
> >>         </servlet-name>
> >>         <url-pattern>
> >>             /LawSchools
> >>         </url-pattern>
> >>     </servlet-mapping>
> >>      <taglib>
> >>
>
><taglib-uri>http://jakarta.apache.org/taglibs/application-1.0</taglib-uri>
> >>           <taglib-location>/WEB-INF/c.tld</taglib-location>
> >>      </taglib>
> >>   <resource-ref>
> >>       <description>My DB Connection</description>
> >>       <res-ref-name>jdbc/mydb</res-ref-name>
> >>       <res-type>javax.sql.DataSource</res-type>
> >>       <res-auth>Container</res-auth>
> >>   </resource-ref>
> >> </web-app>
> >> ************************
> >> And this is server.xml
> >>
> >> <?xml version='1.0' encoding='utf-8'?>
> >> <Server className="org.apache.catalina.core.StandardServer" debug="0"
> >port="8005" shutdown="SHUTDOWN">
> >>   <Listener
className="org.apache.catalina.mbeans.ServerLifecycleListener"
> >debug="0" jsr77Names="false"/>
> >>   <Listener
> >className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
> >debug="0"/>
> >>   <GlobalNamingResources>
> >>     <Environment name="simpleValue" override="true"
> >type="java.lang.Integer" value="30"/>
> >>     <Resource auth="Container" description="User database that can be
> >updated and saved" name="UserDatabase" scope="Shareable"
> >type="org.apache.catalina.UserDatabase"/>
> >>     <Resource name="jdbc/profsysbackup" scope="Shareable"
> >type="javax.sql.DataSource"/>
> >>     <ResourceParams name="UserDatabase">
> >>       <parameter>
> >>         <name>factory</name>
> >>
<value>org.apache.catalina.users.MemoryUserDatabaseFactory</value>
> >>       </parameter>
> >>       <parameter>
> >>         <name>pathname</name>
> >>         <value>conf/tomcat-users.xml</value>
> >>       </parameter>
> >>     </ResourceParams>
> >>     <ResourceParams name="jdbc/mydb">
> >>       <parameter>
> >>         <name>maxWait</name>
> >>         <value>5000</value>
> >>       </parameter>
> >>       <parameter>
> >>         <name>maxActive</name>
> >>         <value>2</value>
> >>       </parameter>
> >>       <parameter>
> >>         <name>password</name>
> >>         <value>xxx</value>
> >>       </parameter>
> >>       <parameter>
> >>         <name>url</name>
> >>         <value>jdbc:sybase:Tds:xxx:5000</value>
> >>       </parameter>
> >>       <parameter>
> >>         <name>driverClassName</name>
> >>         <value>com.sybase.jdbc2.jdbc.SybDriver</value>
> >>       </parameter>
> >>       <parameter>
> >>         <name>maxIdle</name>
> >>         <value>2</value>
> >>       </parameter>
> >>       <parameter>
> >>         <name>username</name>
> >>         <value>xxx</value>
> >>       </parameter>
> >>     </ResourceParams>
> >>   </GlobalNamingResources>
> >>   <Service className="org.apache.catalina.core.StandardService"
debug="0"
> >name="Tomcat-Standalone">
> >>     <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
> >acceptCount="100" bufferSize="2048" compression="off"
connectionLinger="-1"
> >connectionTimeout="20000" debug="0" disableUploadTimeout="true"
> >enableLookups="true" maxKeepAliveRequests="100" maxProcessors="75"
> >minProcessors="5" port="8080"
> >protocolHandlerClassName="org.apache.coyote.http11.Http11Protocol"
> >proxyPort="0" redirectPort="8443" scheme="http" secure="false"
> >tcpNoDelay="true" useURIValidationHack="false">
> >>       <Factory
> >className="org.apache.catalina.net.DefaultServerSocketFactory"/>
> >>     </Connector>
> >>     <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
> >acceptCount="10" bufferSize="2048" compression="off"
connectionLinger="-1"
> >connectionTimeout="0" debug="0" disableUploadTimeout="false"
> >enableLookups="true" maxKeepAliveRequests="100" maxProcessors="75"
> >minProcessors="5" port="8009"
> >protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"
> >proxyPort="0" redirectPort="8443" scheme="http" secure="false"
> >tcpNoDelay="true" useURIValidationHack="false">
> >>       <Factory
> >className="org.apache.catalina.net.DefaultServerSocketFactory"/>
> >>     </Connector>
> >>     <Engine className="org.apache.catalina.core.StandardEngine"
debug="0"
> >defaultHost="localhost"
> >mapperClass="org.apache.catalina.core.StandardEngineMapper"
> >name="Standalone">
> >>       <Host className="org.apache.catalina.core.StandardHost"
> >appBase="webapps" autoDeploy="true"
> >configClass="org.apache.catalina.startup.ContextConfig"
> >contextClass="org.apache.catalina.core.StandardContext" debug="0"
> >deployXML="true"
> >errorReportValveClass="org.apache.catalina.valves.ErrorReportValve"
> >liveDeploy="true"
mapperClass="org.apache.catalina.core.StandardHostMapper"
> >name="localhost" unpackWARs="true">
> >>         <Context className="org.apache.catalina.core.StandardContext"
> >cachingAllowed="true"
> >charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
> >crossContext="false" debug="0" displayName="Tomcat Administration
> >Application" docBase="../server/webapps/admin"
> >mapperClass="org.apache.catalina.core.StandardContextMapper"
path="/admin"
> >privileged="true" reloadable="false" swallowOutput="false"
useNaming="true"
> >wrapperClass="org.apache.catalina.core.StandardWrapper">
> >>           <Logger className="org.apache.catalina.logger.FileLogger"
> >debug="0" directory="logs" prefix="localhost_admin_log." suffix=".txt"
> >timestamp="true" verbosity="1"/>
> >>         </Context>
> >>         <Context className="org.apache.catalina.core.StandardContext"
> >cachingAllowed="true"
> >charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
> >crossContext="false" debug="0" displayName="Webdav Content Management"
> >docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\webdav"
> >mapperClass="org.apache.catalina.core.StandardContextMapper"
path="/webdav"
> >privileged="false" reloadable="false" swallowOutput="false"
useNaming="true"
> >wrapperClass="org.apache.catalina.core.StandardWrapper">
> >>         </Context>
> >>         <Context className="org.apache.catalina.core.StandardContext"
> >cachingAllowed="true"
> >charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
> >crossContext="true" debug="0" displayName="paralegal"
> >docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\para"
> >mapperClass="org.apache.catalina.core.StandardContextMapper" path="/para"
> >privileged="false" reloadable="false" swallowOutput="false"
useNaming="true"
> >wrapperClass="org.apache.catalina.core.StandardWrapper">
> >>           <Resource auth="Container" description="DB Connection"
> >name="jdbc/mydb" scope="Shareable" type="javax.sql.DataSource"/>
> >>         </Context>
> >>         <Context className="org.apache.catalina.core.StandardContext"
> >cachingAllowed="true"
> >charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
> >crossContext="true" debug="0" displayName="Tomcat Examples"
> >docBase="examples"
> >mapperClass="org.apache.catalina.core.StandardContextMapper"
> >path="/examples" privileged="false" reloadable="true"
swallowOutput="false"
> >useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
> >>           <Logger className="org.apache.catalina.logger.FileLogger"
> >debug="0" directory="logs" prefix="localhost_examples_log." suffix=".txt"
> >timestamp="true" verbosity="1"/>
> >>           <Parameter name="context.param.name" override="false"
> >value="context.param.value"/>
> >>           <Ejb home="com.wombat.empl.EmployeeRecordHome"
> >name="ejb/EmplRecord" remote="com.wombat.empl.EmployeeRecord"
> >type="Entity"/>
> >>           <Ejb description="Example EJB Reference"
> >home="com.mycompany.mypackage.AccountHome" name="ejb/Account"
> >remote="com.mycompany.mypackage.Account" type="Entity"/>
> >>           <Environment name="maxExemptions" override="true"
> >type="java.lang.Integer" value="15"/>
> >>           <Environment name="foo/name4" override="true"
> >type="java.lang.Integer" value="10"/>
> >>           <Environment name="minExemptions" override="true"
> >type="java.lang.Integer" value="1"/>
> >>           <Environment name="foo/bar/name2" override="true"
> >type="java.lang.Boolean" value="true"/>
> >>           <Environment name="name3" override="true"
> >type="java.lang.Integer" value="1"/>
> >>           <Environment name="foo/name1" override="true"
> >type="java.lang.String" value="value1"/>
> >>           <LocalEjb description="Example Local EJB Reference"
> >home="com.mycompany.mypackage.ProcessOrderHome"
> >local="com.mycompany.mypackage.ProcessOrder" name="ejb/ProcessOrder"
> >type="Session"/>
> >>           <Resource auth="SERVLET" name="jdbc/EmployeeAppDb"
> >scope="Shareable" type="javax.sql.DataSource"/>
> >>           <Resource auth="Container" name="mail/Session"
scope="Shareable"
> >type="javax.mail.Session"/>
> >>           <ResourceParams name="jdbc/EmployeeAppDb">
> >>             <parameter>
> >>               <name>password</name>
> >>               <value></value>
> >>             </parameter>
> >>             <parameter>
> >>               <name>url</name>
> >>               <value>jdbc:HypersonicSQL:database</value>
> >>             </parameter>
> >>             <parameter>
> >>               <name>driverClassName</name>
> >>               <value>org.hsql.jdbcDriver</value>
> >>             </parameter>
> >>             <parameter>
> >>               <name>username</name>
> >>               <value>sa</value>
> >>             </parameter>
> >>           </ResourceParams>
> >>           <ResourceParams name="mail/Session">
> >>             <parameter>
> >>               <name>mail.smtp.host</name>
> >>               <value>localhost</value>
> >>             </parameter>
> >>           </ResourceParams>
> >>           <ResourceLink global="simpleValue"
name="linkToGlobalResource"
> >type="java.lang.Integer"/>
> >>         </Context>
> >>         <Context className="org.apache.catalina.core.StandardContext"
> >cachingAllowed="true"
> >charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
> >crossContext="false" debug="0"
>
>docBase="c:/local/tomcat/jakarta-tomcat-4.1.24/webapps/application-examples
"
> >mapperClass="org.apache.catalina.core.StandardContextMapper"
> >path="/application-examples" privileged="false" reloadable="false"
> >swallowOutput="false" useNaming="true"
> >wrapperClass="org.apache.catalina.core.StandardWrapper">
> >>         </Context>
> >>         <Context className="org.apache.catalina.core.StandardContext"
> >cachingAllowed="true"
> >charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
> >crossContext="false" debug="0" displayName="Tomcat Documentation"
> >docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\tomcat-docs"
> >mapperClass="org.apache.catalina.core.StandardContextMapper"
> >path="/tomcat-docs" privileged="false" reloadable="false"
> >swallowOutput="false" useNaming="true"
> >wrapperClass="org.apache.catalina.core.StandardWrapper">
> >>         </Context>
> >>         <Context className="org.apache.catalina.core.StandardContext"
> >cachingAllowed="true"
> >charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
> >crossContext="true" debug="0" displayName="Shyly"
> >docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\shyly"
> >mapperClass="org.apache.catalina.core.StandardContextMapper"
path="/shyly"
> >privileged="false" reloadable="true" swallowOutput="false"
useNaming="true"
> >wrapperClass="org.apache.catalina.core.StandardWrapper">
> >>         </Context>
> >>         <Context className="org.apache.catalina.core.StandardContext"
> >cachingAllowed="true"
> >charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
> >crossContext="false" debug="0" displayName="Tomcat Manager Application"
> >docBase="../server/webapps/manager"
> >mapperClass="org.apache.catalina.core.StandardContextMapper"
path="/manager"
> >privileged="true" reloadable="false" swallowOutput="false"
useNaming="true"
> >wrapperClass="org.apache.catalina.core.StandardWrapper">
> >>           <ResourceLink global="UserDatabase" name="users"
> >type="org.apache.catalina.UserDatabase"/>
> >>         </Context>
> >>         <Context className="org.apache.catalina.core.StandardContext"
> >cachingAllowed="true"
> >charsetMapperClass="org.apache.catalina.util.CharsetMapper"
cookies="true"
> >crossContext="false" debug="0" displayName="Welcome to Tomcat"
> >docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\ROOT"
> >mapperClass="org.apache.catalina.core.StandardContextMapper" path=""
> >privileged="false" reloadable="false" swallowOutput="false"
useNaming="true"
> >wrapperClass="org.apache.catalina.core.StandardWrapper">
> >>         </Context>
> >>         <Logger className="org.apache.catalina.logger.FileLogger"
> >debug="9" directory="logs" prefix="localhost_log." suffix=".txt"
> >timestamp="true" verbosity="4"/>
> >>       </Host>
> >>       <Logger className="org.apache.catalina.logger.FileLogger"
debug="0"
> >directory="logs" prefix="catalina_log." suffix=".txt" timestamp="true"
> >verbosity="1"/>
> >>       <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
> >debug="0" resourceName="UserDatabase" validate="true"/>
> >>     </Engine>
> >>   </Service>
> >> </Server>
> >>
> >> At 02:12 PM 4/17/2003 -0400, you wrote:
> >>
> >> >Hi Shyly,
> >> >
> >> >It looks like you have everything right.
> >> >
> >> >You are not missing an environment variable, assuming you meant
> >> >%CATALINA_HOME% and not %TOMCAT% or %CATALINA_HOM% below.
> >> >
> >> >Do you have the context entry in server.xml inside <host>?
> >> >
> >> >Also, do you have the <resource-ref> in the right place in the
> >> >web.xml file?  Those entries have to be in the right order.
> >> >
> >> >It has to be after </error-page> and before <security-constraint>.
> >> >
> >> >Can you post (or send directly) your entire server.xml and web.xml
> >> >files after sanitizing them?
> >> >
> >> >Rick
> >>


---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org


Re: general server performance (was Re: apache svn server memory usage?)

Posted by Philip Martin <ph...@codematters.co.uk>.
Chris Hecker <ch...@d6.com> writes:

> Right, but even checkouts seem pokey...are they considered
> transactions as far as disk syncing as well (I assume not)?

Checkouts are transactions.

> Also, is there any way to trade risk for performance, and have it
> not sync to disk as often, or schedule it for the background, etc.?

Perhaps 'svnadmin create --bdb-txn-nosync' is what you want?  You can
alter an existing repository by setting DB_TXN_NOSYNC/DB_TXN_WRITE_NOSYNC
in the repository's DB_CONFIG.
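As a sketch of what that DB_CONFIG change looks like (the db/DB_CONFIG path
is where BDB-backed Subversion repositories keep their Berkeley DB settings;
the trade-off is that a crash can lose the most recent commits, though not
repository integrity):

```
# <repository-root>/db/DB_CONFIG
set_flags DB_TXN_NOSYNC
```

The environment picks up the flag the next time it is opened;
'svnadmin create --bdb-txn-nosync' sets it for you at creation time.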

-- 
Philip Martin

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: general server performance (was Re: apache svn server memory usage?)

Posted by Branko Čibej <br...@xbc.nu>.
Chris Hecker wrote:

>
>> > Right, but even checkouts seem pokey...are they considered
>> > transactions as far as disk syncing as well (I assume not)?
>> I'm talking about database transactions, and yes, quite a few of those
>> take place during checkout.
>
>
> Ah, doesn't it seem a bit wrong to be doing logged transactions for
> read-only operations (like up and co)?  It seems like there'd be a
> lighter weight BDB process for that.

:-) It's not that simple.  An SVN checkout or update isn't just about
reading from the database.

Of course, there are lots of places in the code where we could (and IMHO
should) stop using transactions.  There's even an issue about this (409),
but as I've said before elsewhere, this is anything but a trivial thing
to do.  It involves big changes in the FS implementation, and we can't
afford to do those ATM.


-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: Re: Cannot load JDBC driver class

Posted by Rick Fincher <rn...@tbird.com>.
Great! Glad it worked.  Yes, your connections will be pooled now.  You just
have to be sure to explicitly close all statements and result sets when
using pooled connections.  A lot of people rely on closing the connection
to do that, but since the pool doesn't actually close the connection, open
statements and result sets pile up and you'll eventually run out of
connections.
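To make Rick's point concrete, here is a minimal, self-contained sketch (our
own illustration, not DBCP code): do-nothing JDBC stubs built with
java.lang.reflect.Proxy record close() calls, so the close-everything-in-
reverse-order pattern is visible without a real database. The class and
method names below are ours.

```java
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class PooledCloseDemo {

    // Build a do-nothing stub for a JDBC interface that records close()
    // calls, so the close-ordering pattern can be shown without a driver.
    @SuppressWarnings("unchecked")
    static <T> T stub(Class<T> iface, String name, List<String> closed) {
        return (T) Proxy.newProxyInstance(
                iface.getClassLoader(), new Class<?>[] { iface },
                (proxy, method, args) -> {
                    if (method.getName().equals("close")) closed.add(name);
                    if (method.getReturnType() == boolean.class) return false;
                    return null;
                });
    }

    static List<String> demo() throws Exception {
        List<String> closed = new ArrayList<>();
        Connection conn = stub(Connection.class, "connection", closed);
        try {
            Statement stmt = stub(Statement.class, "statement", closed);
            try {
                ResultSet rs = stub(ResultSet.class, "resultset", closed);
                try {
                    // ... read rows from rs here ...
                } finally {
                    rs.close();        // close innermost resource first
                }
            } finally {
                stmt.close();          // then the statement
            }
        } finally {
            // With a pool such as DBCP, close() only returns the connection
            // to the pool; the statement and result set must be closed
            // explicitly above, or they leak.
            conn.close();
        }
        return closed;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(demo());
    }
}
```

Running it prints the close order [resultset, statement, connection]; the
point is that conn.close() alone would never have reached the other two.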

Rick

> You're beautiful!
>
> I already had the DBCP jar files in place, but following your suggestions
> I used the XML for Oracle rather than MySQL and moved everything out of
> the server.xml's global JNDI into the context, and it worked!  (Got
> Connection org.apache.commons.dbcp.PoolableConnection@4rfg345.)  I guess
> this means I can make pooled connections within a webapp, which is much
> more than I could do before.
>
> Thank you so much for all of your help - I could not have done this
> without you.
> Shyly
>




Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Perrin Harkins <ph...@gmail.com>.
On 1/31/07, Todd Finney <tf...@boygenius.com> wrote:
> I can set it up so that it does the copy->save to pnotes dance for every
> one of the variables, except perhaps for the actual session handle, which
> is stuck into pnotes('SESSION_HANDLE').

If you really need to keep a ref to $session like that, then you
definitely have to use the is_initial_req approach or turn off the
exclusive locking.

- Perrin

Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Todd Finney <tf...@boygenius.com>.
At 09:36 AM 1/31/2007 -0500, Perrin Harkins wrote:
>On 1/31/07, Todd Finney <tf...@boygenius.com> wrote:
>>It's responsible for making sure that the client has a session, and it
>>takes any of the values in the session and stores them in pnotes.
>
>Are you sure that you had removed all of these when you did the test
>of copying the session_id, and it still had the same problem?

Now that you mention it, I'm fairly certain that I didn't.  That stab was 
before I moved to the test case, which means that all of that stuff would 
have still been in there.  Damn my damnable dumbassery.

I can set it up so that it does the copy->save to pnotes dance for every 
one of the variables, except perhaps for the actual session handle, which 
is stuck into pnotes('SESSION_HANDLE').

It would appear that using is_initial_req is the best possible solution to 
all of this.







Re: svn diff on renamed files

Posted by kf...@collab.net.
Chris Hecker <ch...@d6.com> writes:
> Obviously getting 1 fixes this particular case for 2, but this one
> seems dead simple and I'm assuming there are other reasons to want to
> access the current directory in the repository.  Same answer for
> this...

Yeah, I thought so too.  But then I tried to come up with one and I
couldn't (I mean, one that wouldn't be satisfied by just referring to
the working copy path.)

> > > > 3.  If there's an @ symbol on a non-full-url'd file name, still look
> > > > for it at that revision number in the repository.  In other words,
> > > > old.txt is not in my wc, but old.txt@3 makes total sense
> > > > contextually, even though there's no old.txt in my wc.  This would
> > > > save a lot of typing.
> >Or: if we have (2), is (3) really necessary :-) ?
> 
> I think 3 is the "better" feature than 2, so picking one I'd pick 3
> (say that sentence 5 times fast :).

I did, and included the part about "5 times fast" for good measure :-) !

> They're both just handy UI
> shortcuts, and accomplish the same thing, but not vital.
> 
> As we both say, getting 1 would alleviate the need for 2 and 3 for
> this specific problem, but I've been using svn for all of 2 days, so I
> don't have a good feel for other situations in which they'd be
> desirable and how often you need to type the full url.

I think one of these would be useful, but personally would put them in
the Post-1.0 bucket right now.  You could file an enhancement request
and put it in that milestone... Or if you wait to wait till you've
been using Subversion longer and see how you feel then, that works
too.

-K


Re: general server performance (was Re: apache svn server memory usage?)

Posted by Chris Hecker <ch...@d6.com>.
> > Right, but even checkouts seem pokey...are they considered
> > transactions as far as disk syncing as well (I assume not)?
>I'm talking about database transactions, and yes, quite a few of those
>take place during checkout.

Ah, doesn't it seem a bit wrong to be doing logged transactions for 
read-only operations (like up and co)?  It seems like there'd be a lighter 
weight BDB process for that.

Chris




Re: One service with operations in different namespaces

Posted by Anne Thomas Manes <an...@manes.net>.
You must use WSDL2Java to generate your server skeleton.
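For reference, with Axis 1.x that generation step looks roughly like the
following (a sketch: the jar list and the file name twoNamespaces.wsdl are
placeholders for your own classpath and WSDL document):

```
# Generate server-side bindings (skeleton + deploy.wsdd) from the WSDL.
java -cp axis.jar:jaxrpc.jar:commons-discovery.jar:commons-logging.jar:saaj.jar:wsdl4j.jar \
  org.apache.axis.wsdl.WSDL2Java --server-side --skeletonDeploy true twoNamespaces.wsdl
```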

At 10:12 PM 9/25/2003 +0200, you wrote:
>At the beggining I'd like to thank you for help!
>
>OK, this is WSDL document. Far I understand documentation,  the wsdlFile 
>atribute in deployment is just for returning WSDL, not for Axis service 
>configuration, isn't it? I'm afraid I will still get errors. Am I wrong?
>
>One more time thanks for help!
>Marcin
>
>>You define the schemas in the <types> section of the WSDL document. For 
>>example:
>><wsdl:definitions name='twoNamespaces'
>>     targetNamespace='urn:twoNamespaces/wsdl'
>>     xmlns:soap='http://schemas.xmlsoap.org/wsdl/soap/'
>>     xmlns:wsdl='http://schemas.xmlsoap.org/wsdl/'
>>     xmlns:ns1='urn:twoNamespaces/ns1'
>>     xmlns:ns2='urn:twoNamespaces/ns2'
>>     xmlns:tns='urn:twoNamespaces/wsdl'>
>>     <wsdl:types>
>>         <xsd:schema targetNamespace='urn:twoNamespaces/ns1'
>>             xmlns:xsd='http://www.w3.org/2001/XMLSchema'>
>>             <xsd:element name='op1' type='string'/>
>>         </xsd:schema>
>>         <xsd:schema targetNamespace='urn:twoNamespaces/ns2'
>>             xmlns:xsd='http://www.w3.org/2001/XMLSchema'>
>>             <xsd:element name='op2' type='string'/>
>>         </xsd:schema>
>>     </wsdl:types>
>>     <wsdl:message name='op1'>
>>         <wsdl:part name='body' element='ns1:op1'/>
>>     </wsdl:message>
>>     <wsdl:message name='op2'>
>>         <wsdl:part name='body' element='ns2:op2'/>
>>     </wsdl:message>
>>     <wsdl:portType name='interface'>
>>         <wsdl:operation name='op1'>
>>             <wsdl:input message='tns:op1'/>
>>         </wsdl:operation>
>>         <wsdl:operation name='op2'>
>>             <wsdl:input message='tns:op2'/>
>>         </wsdl:operation>
>>     </wsdl:portType>
>>     <wsdl:binding name='interfaceSOAP' type='tns:interface'>
>>         <soap:binding
>>             transport='http://schemas.xmlsoap.org/soap/http'
>>             style='document'/>
>>         <wsdl:operation name='op1'>
>>             <soap:operation
>>               soapAction='op1'
>>               style='document'/>
>>             <wsdl:input>
>>                 <soap:body use='literal'/>
>>             </wsdl:input>
>>         </wsdl:operation>
>>         <wsdl:operation name='op2'>
>>             <soap:operation
>>               soapAction='op2'
>>               style='document'/>
>>             <wsdl:input>
>>                 <soap:body use='literal'/>
>>             </wsdl:input>
>>         </wsdl:operation>
>>     </wsdl:binding>
>
>--
>-------------------------------------------------------------
>                       Marcin Okraszewski
>okrasz@o2.pl                                       GG: 341942
>okrasz@vlo.ids.gda.pl          PGP: www.okrasz.prv.pl/pgp.asc
>-------------------------------------------------------------
>



Re: general server performance

Posted by Greg Stein <gs...@lyra.org>.
On Wed, Jul 02, 2003 at 01:04:37AM +0200, Branko Čibej wrote:
>...
> You can set the DB_TXN_NOSYNC option in DB_CONFIG, but of course if you
> do that, you're prone to irrecoverable database corruption if anything
> goes wrong with your system.

It would be *really* cool if we could adjust that on a per-FS-open basis.
"I'm going to do some work which can be lost." That would be just perfect
for update reports, where a crash on the server will cause the client to
simply restart. Any data loss is no big deal.

(of course, we don't want to have the BDB lose integrity and need to be
 recovered; we really want something that says "don't be loggy")

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: general server performance (was Re: apache svn server memory usage?)

Posted by Branko Čibej <br...@xbc.nu>.
Chris Hecker wrote:

>
>> Caching doesn't help you when you have to fsync the database log files
>> at every transaction commit.
>
>
> Right, but even checkouts seem pokey...are they considered
> transactions as far as disk syncing as well (I assume not)?

I'm talking about database transactions, and yes, quite a few of those
take place during checkout.

> Also, is there any way to trade risk for performance, and have it not
> sync to disk as often, or schedule it for the background, etc.?

You can set the DB_TXN_NOSYNC option in DB_CONFIG, but of course if you
do that, you're prone to irrecoverable database corruption if anything
goes wrong with your system.
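For reference, that option goes in a plain-text DB_CONFIG file in the repository's db/ directory; a minimal sketch (the path comment is an assumption about your repository layout):

```
# <repos>/db/DB_CONFIG -- read by Berkeley DB when the environment is opened
set_flags DB_TXN_NOSYNC
```

Berkeley DB only rereads this file the next time the environment is opened, so the change takes effect after the server reopens the repository.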

> But even ignoring that, what explains why the net throughput is so low?

Throughput is the amount of data sent divided by the time it takes to send
it. If it takes longer to have the data available because the server
blocks on disk I/O, then of course that'll lower your throughput.


-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

RE: RE: Release/Branching best practices

Posted by James Oltmans <JO...@bolosystems.com>.
What are the advantages of releasing from tags rather than the trunk? We
promote tags off the trunk but it looks like you create the release and
then promote that. When do you start putting code into the release?

 

________________________________

From: Ian Wood [mailto:Ian.Wood@sucden.co.uk] 
Sent: Thursday, November 29, 2007 4:55 AM
To: James Oltmans
Cc: users@subversion.tigris.org
Subject: RE: Release/Branching best practices

 

>current solution is to require that developers specify in their log
messages (enforced via a pre-commit hook) which release their
fix/project belongs to. This should help us scrape the log messages to
identify which projects and bugs went into a release.

 

We do a similar thing using a prefix before each commit. 

 

In the past we also exported the log of each path, i.e. branch to see
what work was done in there using the prefix to group commits together. 

 

I noticed the other day that you can link TSVN with issue trackers -
maybe that will be a way to go. 

 

Another difference we have is that we release from Tags rather than the
Trunk. 

________________________________

From: James Oltmans [mailto:JOltmans@bolosystems.com] 
Sent: 29 November 2007 01:45
To: Ian Wood
Cc: users@subversion.tigris.org
Subject: RE: Release/Branching best practices

 

Thanks for your response Ian.

Currently we do something like the picture at the bottom. We spawn
projects (p1222) as branches off of releases (prd_7.2.2).

Releases are cut from the trunk. Bugs and projects are moved to the
trunk when they are complete and approved. Our issue is that we have
trouble identifying what is part of each release. For instance, suppose we
had 12 bugs come in from the bugs branch and 2 projects come in, then cut
our release, moved 3 more bugs into the trunk, and then found issues with
the release. We fix the release and then merge back to the trunk, but
this makes it convoluted to tell which commits belong to which
release. Our current solution is to require that developers specify in
their log messages (enforced via a pre-commit hook) which release their
fix/project belongs to. This should let us scrape the log messages to
identify which projects and bugs went into a release.

 

Our other issue is keeping the bugs branch and trunk in sync. With each
release we merge everything over to bugs (generally this will reapply
fixed bugs and move projects over). However, there's never a guarantee
that it won't screw up the bugs team's development process.

 

Note: We used to have a separate QA branch to keep the trunk always
stable for spawning projects, but there's no point in doing that since
we now spawn projects off the required release branch.

 
<http://sp1/rd/scm/Help%20images/Source%20PNGs/Merge%20Outline%20Detaile
d_v3.0.png> 

 

________________________________

From: Ian Wood [mailto:Ian.Wood@sucden.co.uk] 
Sent: Wednesday, November 28, 2007 2:33 AM
To: James Oltmans; users@subversion.tigris.org
Subject: RE: Release/Branching best practices

 

Hi James,

 

This is how we do it. 

 

We have a repo as below.

 

>Trunk

>Branches

 >Versions

  >1_0_0

  >1_0_1

>Tags

 >SuccessfulBuilds

 >1_0_0

  >1_0_0_1

  >1_0_0_2

 

The main work is done on the Trunk. Then each month we make a Version
branch of the current month's version (just the first three numbers;
the fourth is determined by the CruiseControl machine).

 

The code on this version branch is released to the test team and tested
and any bugs found are then fixed on that branch and released again. 

 

Then when that code is released to live the changes made are merged back
to the Trunk and another branch is taken. 

 

Incidentally, each time the Version branch builds successfully, a Tag is
taken with the version number and a deployment script is created.

 

We are not finding it too burdensome; the only problem we have found is
when people make changes to the same code in both places without merging
as they go.

 

What are you currently doing?

 

Best regards,

 

Ian

 

 

 

 

 

________________________________

From: James Oltmans [mailto:JOltmans@bolosystems.com] 
Sent: 28 November 2007 00:55
To: users@subversion.tigris.org
Subject: Release/Branching best practices

 

Hello all,

 

Could someone point me in the right direction for finding best-practices
or software to manage releases? We are trying to use a monthly release
cycle and our current branch and merge management is becoming a bit
burdensome.

 

Thanks,
James

 

www.sucden.co.uk <http://www.sucden.co.uk/> 

Sucden (UK) Limited, 5 London Bridge Street, London SE1 9SG
Telephone +44 20 7940 9400
 
Registered in England no. 1095841
VAT registration no. GB 446 9061 33

Authorised and Regulated by the Financial Services Authority (FSA) and
entered in the FSA register under no. 114239

 

This email, including any files transmitted with it, is confidential and
may be privileged. It may be read, copied and used only by the intended
recipient. If you are not the intended recipient of this message, please
notify postmaster@sucden.co.uk immediately and delete it from your
computer system.

 

We believe, but do not warrant, that this email and its attachments are
virus-free, but you should check. 

 

Sucden (UK) Ltd may monitor traffic data of both business and personal
emails. By replying to this email, you consent to Sucden's monitoring
the content of any emails you send to or receive from Sucden. Sucden is
not liable for any opinions expressed by the sender where this is a
non-business email.

The contents of this e-mail do not constitute advice and should not be
regarded as a recommendation to buy, sell or otherwise deal with any
particular investment.

This message has been scanned for viruses by BlackSpider MailControl
<http://www.blackspider.com/> 

 



Re: cvs commit: apr STATUS

Posted by "William A. Rowe, Jr." <wr...@rowe-clan.net>.
At 05:17 PM 4/2/2002, you wrote:
>Here are the revision numbers:
>
>apr/build/apr_hints.m4, revision 1.39
>apr/locks/unix/proc_mutex.c, revision 1.13

Now adopted by APACHE_2_0_34; thanks!


Re: [users@httpd] Virtual host causing problems with default server

Posted by Robert Moskowitz <rg...@htt-consult.com>.
At 03:45 PM 7/10/2003 +0200, Robert Andersson wrote:

>Is home.com defined as a virtual host? If that is your "main server" and it
>isn't sitting on a different IP than your virtual hosts, you won't ever
>reach it, due to reasons previously explained. If you are using name-based
>virtual hosting, you must define each site/domain as a virtual host. With a
>few exceptions, once you start adding virtual hosts, your "main
>configuration" only becomes the default configuration for the virtual hosts;
>not a host of its own.
>
>If you still have problems, please summarize your case again, since this has
>become a bit confusing.

Boy, was I confusicated by the docs that I read.  Thanks to a doc sent to 
me by Patrick Donker, I figured out all I was doing wrong....

Both the NameVirtualHost and the <VirtualHost _____> are supposed to have IP
addresses for the server, and only FQDNs if you want to use DNS at startup
to resolve names to addresses.  Duh.

What distinguishes multiple <VirtualHost _______> blocks is the content of
the ServerName entry.  This was not clear from the main VirtualHost doc
page, where it seemed like you needed separate IP addresses for each
virtual host (which was once the case).

So it is working now.  I have my NameVirtualHost entry and 3 <VirtualHost 
_____> blocks.
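A minimal sketch of that working layout (the IP 192.0.2.10 and the document roots are placeholders, not the poster's actual values):

```apache
NameVirtualHost 192.0.2.10:80

# Apache matches the request's Host: header against ServerName;
# the first block also serves as the default for unmatched requests.
<VirtualHost 192.0.2.10:80>
    ServerName home.com
    DocumentRoot "D:/Pages"
</VirtualHost>

<VirtualHost 192.0.2.10:80>
    ServerName abc.org
    DocumentRoot "D:/Pages-abc"
</VirtualHost>
```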

thanks all!


The only person who always got his work done by Friday was Robinson Crusoe.


---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Virtual host causing problems with default server

Posted by Robert Andersson <ro...@profundis.nu>.
Robert Moskowitz wrote:
> I originally did not use the command:
>
> NameVirtualHost
>
> I added it both trying:
>
> NameVirtualHost www.home.com
>
> and
>
> NameVirtualHost home.com
>
> with no change.

Do www.home.com and home.com resolve to different IPs? If not, why do you
have them both? NameVirtualHost tells Apache which IP/port combinations to
use for name-based virtual hosting. It is always preferred to use the IP
instead of the hostname:
http://httpd.apache.org/docs-2.0/mod/core.html#namevirtualhost

> I tried adding:
>
> <VirtualHost home.com>
>      ServerAdmin abc@home.com
>      DocumentRoot "D:/Pages"
>      ServerName abc.org:80
>      ErrorLog logs/error.log
>      CustomLog logs/access.log common
> </VirtualHost>
>
> And this produced some error that flashed too quick across the screen, but
> something like
>
> 'this conflicts with the default server'

If this was in addition to the other VHost, the problem is that you have
given two VHosts the same ServerName, which, of course, is conflicting.

> and still no access to home.com

Is home.com defined as a virtual host? If that is your "main server" and it
isn't sitting on a different IP than your virtual hosts, you won't ever
reach it, due to reasons previously explained. If you are using name-based
virtual hosting, you must define each site/domain as a virtual host. With a
few exceptions, once you start adding virtual hosts, your "main
configuration" only becomes the default configuration for the virtual hosts;
not a host of its own.

If you still have problems, please summarize your case again, since this has
become a bit confusing.

Regards,
Robert Andersson




---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Virtual host causing problems with default server

Posted by Robert Moskowitz <rg...@htt-consult.com>.
At 06:52 AM 7/9/2003 +0200, Robert Andersson wrote:
>Aaron Morris wrote:
> > While this may be true for Name based VirtualHosts (it makes sense, I
> > just never had a problem with it :) ), the same cannot be said for IP or
> > port based VirtualHosts.  If a VirtualHost is not matched in an IP based
> > setup, then the global server config is used.
>
>True, my mistake. Thanks for pointing it out. I had forgotten that
>difference, although it makes perfect sense, and assumed that the OP was
>utilizing name-based virtual hosts (which I dare guess still is true).

As I showed in my message I have:

<VirtualHost abc.org>
     ServerAdmin abc@abc.org
     DocumentRoot "D:/Pages-abc"
     ServerName abc.org:80
     ErrorLog logs/abc.org-error.log
     CustomLog logs/abc.org-access.log common
</VirtualHost>

I originally did not use the command:

NameVirtualHost

I added it both trying:

NameVirtualHost www.home.com

and

NameVirtualHost home.com

with no change.

I tried adding:

<VirtualHost home.com>
     ServerAdmin abc@home.com
     DocumentRoot "D:/Pages"
     ServerName abc.org:80
     ErrorLog logs/error.log
     CustomLog logs/access.log common
</VirtualHost>

And this produced some error that flashed too quick across the screen, but 
something like

'this conflicts with the default server'

and still no access to home.com

If I take out the virtualhost section, the default server works, so I know 
that is the problem...



---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Virtual host causing problems with default server

Posted by Robert Andersson <ro...@profundis.nu>.
Aaron Morris wrote:
> While this may be true for Name based VirtualHosts (it makes sense, I
> just never had a problem with it :) ), the same cannot be said for IP or
> port based VirtualHosts.  If a VirtualHost is not matched in an IP based
> setup, then the global server config is used.

True, my mistake. Thanks for pointing it out. I had forgotten that
difference, although it makes perfect sense, and assumed that the OP was
utilizing name-based virtual hosts (which I dare guess still is true).

Regards,
Robert Andersson




---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Virtual host causing problems with default server

Posted by Aaron Morris <aa...@mindspring.com>.
Robert Andersson wrote:

> What you may be missing is that as soon as you define a virtual host, your
> "main server" ceases to exist. You cannot have both a "main server" and one
> or more virtual hosts; the modes are mutually exclusive. In fact, the first
> defined virtual host becomes the default/main server for requests that can't
> be matched to another virtual host.

While this may be true for Name based VirtualHosts (it makes sense, I 
just never had a problem with it :) ), the same cannot be said for IP or 
port based VirtualHosts.  If a VirtualHost is not matched in an IP based 
setup, then the global server config is used.

-- 
Aaron W Morris <aa...@mindspring.com> (decep)
PGP Key ID:  259978D1



---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Virtual host causing problems with default server

Posted by Robert Andersson <ro...@profundis.nu>.
Robert Moskowitz wrote:
> I can't figure out what I am missing.  And I think I must be missing
> something from my Main Server section to cause a URL NOT to the
> virtual host to still go to the virtual host.

What you may be missing is that as soon as you define a virtual host, your
"main server" ceases to exist. You cannot have both a "main server" and one
or more virtual hosts; the modes are mutually exclusive. In fact, the first
defined virtual host becomes the default/main server for requests that can't
be matched to another virtual host.


Regards,
Robert Andersson




---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: task : from which directory is the java command started

Posted by Antoine Lévy-Lambert <an...@antbuild.com>.
Rudolf Nottrott wrote:

> Thanks Antoine for the suggestion, but it doesn't seem to make a 
> difference.
>
> Basically, I need to make sure that the command
>
> 'java org.hsqldb.Server -database testxyz'
>
> which is run by the ant target below, is started from the current 
> directory, the directory in which the database files are.
>
> How can I control the directory in which a  <java task gets started?  
> Is there a parameter for that?
>
> Thanks,
> Rudolf
>
>
>
>
 From the manual, there is a "dir" attribute for the java task. Note
that it also requires fork="true", otherwise it is ignored.
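A sketch of a target for Rudolf's case (the target name and the db.dir property are assumptions; point db.dir at the directory holding the database files):

```xml
<target name="start-hsqldb">
  <!-- fork="true" runs a separate JVM so that dir= takes effect;
       without fork, the dir attribute is silently ignored -->
  <java classname="org.hsqldb.Server" fork="true" dir="${db.dir}">
    <arg line="-database testxyz"/>
  </java>
</target>
```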


Antoine


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@ant.apache.org
For additional commands, e-mail: user-help@ant.apache.org


RE: [PATCH] Cache open repository per connection

Posted by Sander Striker <st...@apache.org>.
> From: Mukund [mailto:mukund@tessna.com]
> Sent: Wednesday, July 02, 2003 2:01 PM

> On Tue, Jul 01, 2003 at 07:25:54PM -0700, Greg Stein wrote:
> | > >    Sander has experimented with this, but it didn't seem to do much.
> | 
> | Bugs :-)
> | 
> 
> Sander, can you comment on Greg's message and if any changes to the patch
> are due, please make them. I would really appreciate this particular
> patch.

Yes, as soon as I get back tonight.

Sander

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [PATCH] Cache open repository per connection

Posted by Mukund <mu...@tessna.com>.
On Tue, Jul 01, 2003 at 07:25:54PM -0700, Greg Stein wrote:
| > >    Sander has experimented with this, but it didn't seem to do much.
| 
| Bugs :-)
| 

Sander, can you comment on Greg's message and if any changes to the patch
are due, please make them. I would really appreciate this particular
patch.

Mukund


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

RE: [PATCH] Cache open repository per connection

Posted by Sander Striker <st...@apache.org>.
> From: Greg Stein [mailto:gstein@lyra.org]
> Sent: Wednesday, July 02, 2003 4:26 AM

> Bugs :-)

'lets :)  Perceptions... ;)  Okay, okay, it was a hack.  Happy now? ;P :)

>>...
>> +++ subversion/mod_dav_svn/repos.c      (working copy)
>>...
>> +  /* Get the repository */
>> +  base_path = apr_pstrndup(r->pool, r->uri,
>> +                           ap_find_path_info(r->uri, r->path_info));
> 
> Hmm. I'm not really sure what this is extracting from the URL. A clearer
> comment might be helpful.

heh heh, it extracts the Location part.  I came across this piece of
code a while ago in the httpd codebase and it seems about the only
way to get to the Location part, apart from storing it in the dir config
at dir config creation time and retrieving it from there later on.
 
>> +  repos_key = apr_pstrcat(r->pool, "mod_dav_svn:", base_path, root_path);
> 
> Might want to put a ,NULL on the end there. Otherwise, your key is random
> and will never get a cache-hit :-)

Crap!  :)  You're right.  This would explain why I never saw any speedup ;) :)
 
> In any case, I'd recommend caching on the fs_path instead of the URI.

Nope.  See the discussion on list.  Greg Hudson was best able to express why
that can be a bad idea.  Basically it comes down to being able to map multiple
Locations to the same repository.  Come to think of it, root_path will probably
do just fine... but we might consider blending in the hostname as well.
 
>> +  repos->repos = (void *)apr_table_get(r->connection->notes, repos_key);
> 
> I'd recommend using r->connection->pool's userdata instead of the notes.
> Tables are not meant to store binary values; I'm not sure that it is very
> reliable.

Oh come on, live a little ;).

Sidenote: we need to fix the apr docs:

 * Tables are used to store entirely opaque structures
 * for applications, while Arrays are usually used to
 * deal with string lists.

Of course this isn't true when you are using the add/set functions, as opposed
to addn/setn, since those try to copy the data.

> You could (again) see corrupted data or cache misses.

Not in this case.  No copying of the reference takes place.  But I agree that
using the connection pools userdata is cleaner.  New patch below.


Sander

Index: subversion/mod_dav_svn/repos.c
===================================================================
--- subversion/mod_dav_svn/repos.c      (revision 6386)
+++ subversion/mod_dav_svn/repos.c      (working copy)
@@ -1076,6 +1076,7 @@
   const char *repos_name;
   const char *relative;
   const char *repos_path;
+  const char *repos_key;
   const char *version_name;
   svn_error_t *serr;
   dav_error *err;
@@ -1181,15 +1182,27 @@
   /* Remember who is making this request */
   repos->username = r->user;

-  /* open the SVN FS */
-  serr = svn_repos_open(&(repos->repos), fs_path, r->pool);
-  if (serr != NULL)
+  /* Cache open repository.  Key it off by root_path, which should be more
+   * unique than the fs_path, given that two Locations may point to the
+   * same repository.
+   */
+  repos_key = apr_pstrcat(r->pool, "mod_dav_svn:", root_path, NULL);
+  apr_pool_userdata_get((void **)&repos->repos, repos_key, r->connection->pool);
+  if (repos->repos == NULL)
     {
-      return dav_svn_convert_err(serr, HTTP_INTERNAL_SERVER_ERROR,
-                                 apr_psprintf(r->pool,
-                                              "Could not open the SVN "
-                                              "filesystem at %s",
-                                              fs_path));
+      serr = svn_repos_open(&(repos->repos), fs_path, r->connection->pool);
+      if (serr != NULL)
+        {
+          return dav_svn_convert_err(serr, HTTP_INTERNAL_SERVER_ERROR,
+                                     apr_psprintf(r->pool,
+                                                  "Could not open the SVN "
+                                                  "filesystem at %s",
+                                                  fs_path));
+        }
+
+      /* Cache the open repos for the next request on this connection */
+      apr_pool_userdata_set(repos->repos, repos_key,
+                            NULL, r->connection->pool);
     }

   /* cache the filesystem object */

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [PATCH] Cache open repository per connection

Posted by Greg Stein <gs...@lyra.org>.
On Wed, Jul 02, 2003 at 01:26:24AM +0200, Sander Striker wrote:
> > From: sussman@collab.net [mailto:sussman@collab.net]
> > Sent: Tuesday, July 01, 2003 5:43 PM
> 
> > 2. As was already mentioned, because HTTP request is stateless, apache
> >    opens and closes/syncs the repository (BDB environment) with
> >    *every* request.  (One user had write caching turned off on his
> >    server;  this caused his http checkouts to arrive about 1 file
> >    every 2 seconds!)  There's been discusion about keeping the
> >    repository open for the whole TCP/IP "connection session", and
> >    Sander has experimented with this, but it didn't seem to do much.

Bugs :-)

>...
> +++ subversion/mod_dav_svn/repos.c      (working copy)
>...
> +  /* Get the repository */
> +  base_path = apr_pstrndup(r->pool, r->uri,
> +                           ap_find_path_info(r->uri, r->path_info));

Hmm. I'm not really sure what this is extracting from the URL. A clearer
comment might be helpful.

> +  repos_key = apr_pstrcat(r->pool, "mod_dav_svn:", base_path, root_path);

Might want to put a ,NULL on the end there. Otherwise, your key is random
and will never get a cache-hit :-)

In any case, I'd recommend caching on the fs_path instead of the URI.

> +  repos->repos = (void *)apr_table_get(r->connection->notes, repos_key);

I'd recommend using r->connection->pool's userdata instead of the notes.
Tables are not meant to store binary values; I'm not sure that it is very
reliable. You could (again) see corrupted data or cache misses.

Cheers,
-g

-- 
Greg Stein, http://www.lyra.org/

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: general server performance (was Re: apache svn server memory usage?)

Posted by Branko Čibej <br...@xbc.nu>.
Mukund wrote:

>On Wed, Jul 02, 2003 at 06:10:12PM +0200, Branko Čibej wrote:
>| Let's just make one thing clear here -- the fsync that happens at every
>| BDB transaction commit has nothing to do with how many times you open
>| the database. Keeping the DB open will help, yes, but it won't
>| significantly decrease the number of fsyncs.
>
>Hi Branko
>
>Perhaps you have not understood what I meant. You can disable the
>fsync which happens at every transaction commit, using the DB_TXN_NOSYNC
>option. However, when you close the DB, the fsync still happens.
>
>When the DB is opened and closed at every request, the whole point of
>DB_TXN_NOSYNC is defeated, as you are literally syncing every small bunch of
>transactions per request as they happen. In an active repository, this
>keeps the disk constantly busy. Keeping an open connection pool
>helps in this case.
>
>I wonder if the sync at DB close can be disabled.
>  
>
It can, but not with an option in DB_CONFIG. You can pass the DB_NOSYNC
flag to the DB->close function. I would recommend against doing that in
our code, though.

-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: general server performance (was Re: apache svn server memory usage?)

Posted by Mukund <mu...@tessna.com>.
On Wed, Jul 02, 2003 at 06:10:12PM +0200, Branko Čibej wrote:
| Let's just make one thing clear here -- the fsync that happens at every
| BDB transaction commit has nothing to do with how many times you open
| the database. Keeping the DB open will help, yes, but it won't
| significantly decrease the number of fsyncs.

Hi Branko

Perhaps you have not understood what I meant. You can disable the
fsync which happens at every transaction commit, using the DB_TXN_NOSYNC
option. However, when you close the DB, the fsync still happens.

When the DB is opened and closed at every request, the whole point of
DB_TXN_NOSYNC is defeated, as you are literally syncing every small bunch of
transactions per request as they happen. In an active repository, this
keeps the disk constantly busy. Keeping an open connection pool
helps in this case.

I wonder if the sync at DB close can be disabled.

Mukund



Re: general server performance (was Re: apache svn server memory usage?)

Posted by Branko Čibej <br...@xbc.nu>.
Mukund wrote:

>On Tue, Jul 01, 2003 at 10:42:43AM -0500, Ben Collins-Sussman wrote:
>|    every 2 seconds!)  There's been discussion about keeping the
>|    repository open for the whole TCP/IP "connection session", and
>|    Sander has experimented with this, but it didn't seem to do much.
>|    Still need to investigate.
>
>Hi Sussman
>
>I am going to try this patch when Sander looks at Greg Stein's comments
>(in this thread) for his patch and releases a new one if he thinks
>modifications are due.
>
>I am not sure why keeping the repository open would not help, as the
>performance degradation is due to syncs of the accumulated
>transactions when the DB is closed at the end of every HTTP request.
>A checkout has the disk chugging like when an OS thrashes.
>  
>
Let's just make one thing clear here -- the fsync that happens at every
BDB transaction commit has nothing to do with how many times you open
the database. Keeping the DB open will help, yes, but it won't
significantly decrease the number of fsyncs.

-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: general server performance (was Re: apache svn server memory usage?)

Posted by Mukund <mu...@tessna.com>.
On Tue, Jul 01, 2003 at 10:42:43AM -0500, Ben Collins-Sussman wrote:
|    every 2 seconds!)  There's been discussion about keeping the
|    repository open for the whole TCP/IP "connection session", and
|    Sander has experimented with this, but it didn't seem to do much.
|    Still need to investigate.

Hi Sussman

I am going to try this patch when Sander looks at Greg Stein's comments
(in this thread) for his patch and releases a new one if he thinks
modifications are due.

I am not sure why keeping the repository open would not help, as the
performance degradation is due to syncs of the accumulated
transactions when the DB is closed at the end of every HTTP request.
A checkout has the disk chugging like when an OS thrashes.

Mukund



Re: [PATCH] Cache open repository per connection, WAS: RE: general server performance (was Re: apache svn server memory usage?)

Posted by pl...@lanminds.com.
>>>>> On Mon, 14 Jul 2003, "Sander" == Sander Striker wrote:

  Sander> Like I said in my earlier comment: _don't_ worry about this
  Sander> patch, I will be applying it myself.

I'm completely confused now. Is what I filed as 1412 the same as what 
I e-mailed you about?

I had them marked as two different threads for some reason.  One of 
which you stated you'd commit yourself (Subject: Oops...Here is the 
patch), and the other, (Subject: Cache open repository per connection).

Of course, now that I look at the messages properly threaded in the 
archive, they do appear to be related :(

Sorry, I *suck* at patch management!

(I can't *wait* for Sander Roobol to get back from holiday!)
-- 

Seeya,
Paul




RE: [PATCH] Cache open repository per connection, WAS: RE: general server performance (was Re: apache svn server memory usage?)

Posted by Sander Striker <st...@apache.org>.
> From: Paul L Lussier [mailto:pll@lanminds.com]
> Sent: Monday, July 14, 2003 8:44 PM

> Filed as issue 1412:
> 
> 	http://subversion.tigris.org/issues/show_bug.cgi?id=1412

You gave me about 20 minutes to reply...  Which of course I didn't
make.  The patch was already committed.  Like I said in my earlier
comment: _don't_ worry about this patch, I will be applying it myself.

Oh well,


Sander


Re: [PATCH] Cache open repository per connection, WAS: RE: general server performance (was Re: apache svn server memory usage?)

Posted by Paul L Lussier <pl...@lanminds.com>.
Filed as issue 1412:

	http://subversion.tigris.org/issues/show_bug.cgi?id=1412
-- 

Seeya,
Paul




[PATCH] Cache open repository per connection, WAS: RE: general server performance (was Re: apache svn server memory usage?)

Posted by Sander Striker <st...@apache.org>.
> From: sussman@collab.net [mailto:sussman@collab.net]
> Sent: Tuesday, July 01, 2003 5:43 PM

> 2. As was already mentioned, because HTTP requests are stateless, apache
>    opens and closes/syncs the repository (BDB environment) with
>    *every* request.  (One user had write caching turned off on his
>    server;  this caused his http checkouts to arrive about 1 file
>    every 2 seconds!)  There's been discussion about keeping the
>    repository open for the whole TCP/IP "connection session", and
>    Sander has experimented with this, but it didn't seem to do much.
>    Still need to investigate.

And here is the limited tested patch.


Sander

Log:
Cache open repository per connection.

* subversion/mod_dav_svn/repos.c

  (dav_svn_get_resource): Store open repository in connection notes
    table, keyed by location and repositoryname.  Use this open
    repository for the duration of the connection.


Index: subversion/mod_dav_svn/repos.c
===================================================================
--- subversion/mod_dav_svn/repos.c      (revision 6386)
+++ subversion/mod_dav_svn/repos.c      (working copy)
@@ -22,6 +22,7 @@
 #include <http_protocol.h>
 #include <http_log.h>
 #include <http_core.h>  /* for ap_construct_url */
+#include <util_script.h> /* for ap_find_path_info */
 #include <mod_dav.h>

 #define APR_WANT_STRFUNC
@@ -1076,6 +1077,8 @@
   const char *repos_name;
   const char *relative;
   const char *repos_path;
+  const char *base_path;
+  const char *repos_key;
   const char *version_name;
   svn_error_t *serr;
   dav_error *err;
@@ -1181,15 +1184,27 @@
   /* Remember who is making this request */
   repos->username = r->user;

-  /* open the SVN FS */
-  serr = svn_repos_open(&(repos->repos), fs_path, r->pool);
-  if (serr != NULL)
+  /* Get the repository */
+  base_path = apr_pstrndup(r->pool, r->uri,
+                           ap_find_path_info(r->uri, r->path_info));
+  repos_key = apr_pstrcat(r->pool, "mod_dav_svn:", base_path, root_path, NULL);
+  repos->repos = (void *)apr_table_get(r->connection->notes, repos_key);
+  if (repos->repos == NULL)
     {
-      return dav_svn_convert_err(serr, HTTP_INTERNAL_SERVER_ERROR,
-                                 apr_psprintf(r->pool,
-                                              "Could not open the SVN "
-                                              "filesystem at %s",
-                                              fs_path));
+      serr = svn_repos_open(&(repos->repos), fs_path, r->connection->pool);
+      if (serr != NULL)
+        {
+          return dav_svn_convert_err(serr, HTTP_INTERNAL_SERVER_ERROR,
+                                     apr_psprintf(r->pool,
+                                                  "Could not open the SVN "
+                                                  "filesystem at %s",
+                                                  fs_path));
+        }
+
+      /* Cache the open repos for the next request on this connection */
+      apr_table_setn(r->connection->notes,
+                     apr_pstrdup(r->connection->pool, repos_key),
+                     (void *)repos->repos);
     }

   /* cache the filesystem object */


RE: general server performance (was Re: apache svn server memory usage?)

Posted by Steven Brown <sw...@ucsd.edu>.

> -----Original Message-----
> From: sussman@collab.net [mailto:sussman@collab.net]
> Sent: Tuesday, July 01, 2003 8:43 AM
> To: Chris Hecker; SVN Dev List
> Subject: Re: general server performance (was Re: apache svn server
> memory usage?)
>
>
> Chris Hecker <ch...@d6.com> writes:
>
> > I should be clear that I'm not complaining here, I know svn is still
> > in development, premature optimization and all that.  I'm just
> > wondering if there's something I've screwed up as the server admin or
> > if this is all stuff that code changes will be necessary to fix.
>
> I have some strong opinions here, and I'll state them at the risk of
> Greg Stein coming at me with an axe.  :-)
>
> I think there are two fundamental problems regarding the "slowness" of
> ra_dav/apache, compared to, say, ra_svn/svnserve:
>
> 1. HTTP is a stateless protocol.  It's just *not* the best choice in
>    the world for something like version control, no matter how you
>    drink the kool-aid.  Even though the client keeps a single TCP/IP
>    connection open to apache, there are still a whole lot of network
>    turnarounds, and the requests/responses are pretty "thick" with
>    headers.
>
>    Now granted, at the moment, we've not yet optimized ra_dav nearly
>    as much as we can.  It's still sending too many requests and
>    turnarounds, waaaay more than it should.  And it will be fixed.
>    And HTTP proxy caches will speed things up as well. But deep down,
>    I still believe that HTTP will never be quite as fast as our custom
>    stateful protocol.

I'm not too familiar with the methods subversion is using in its dav layer,
but I've definitely run into the performance problems with checkout.  Would
HTTP pipelining the requests be possible as a quick hack, i.e., no/minimal
dependency issues?  I'd guess that would remove almost all of the
performance issues related to the network.



Re: general server performance (was Re: apache svn server memory usage?)

Posted by Ben Collins-Sussman <su...@collab.net>.
Chris Hecker <ch...@d6.com> writes:

> I should be clear that I'm not complaining here, I know svn is still
> in development, premature optimization and all that.  I'm just
> wondering if there's something I've screwed up as the server admin or
> if this is all stuff that code changes will be necessary to fix.

I have some strong opinions here, and I'll state them at the risk of
Greg Stein coming at me with an axe.  :-)

I think there are two fundamental problems regarding the "slowness" of
ra_dav/apache, compared to, say, ra_svn/svnserve:

1. HTTP is a stateless protocol.  It's just *not* the best choice in
   the world for something like version control, no matter how you
   drink the kool-aid.  Even though the client keeps a single TCP/IP
   connection open to apache, there are still a whole lot of network
   turnarounds, and the requests/responses are pretty "thick" with
   headers.

   Now granted, at the moment, we've not yet optimized ra_dav nearly
   as much as we can.  It's still sending too many requests and
   turnarounds, waaaay more than it should.  And it will be fixed.
   And HTTP proxy caches will speed things up as well. But deep down,
   I still believe that HTTP will never be quite as fast as our custom
   stateful protocol.

2. As was already mentioned, because HTTP requests are stateless, apache
   opens and closes/syncs the repository (BDB environment) with
   *every* request.  (One user had write caching turned off on his
   server;  this caused his http checkouts to arrive about 1 file
   every 2 seconds!)  There's been discussion about keeping the
   repository open for the whole TCP/IP "connection session", and
   Sander has experimented with this, but it didn't seem to do much.
   Still need to investigate.

At the moment, there's still a tradeoff decision to be made.  If you
use apache, you'll get slower performance than svnserve, but you get a
zillion other great features in return (no unix accounts required,
almost any sort of authentication, path-based authorization, some
degree of WebDAV interoperability, etc.)  I think it's worth the trade
for most people.




general server performance (was Re: apache svn server memory usage?)

Posted by Chris Hecker <ch...@d6.com>.
>Caching doesn't help you when you have to fsync the database log files
>at every transaction commit.

Right, but even checkouts seem pokey... are they treated as transactions as
far as disk syncing goes (I assume not)?  Also, is there any way to
trade risk for performance, and have it not sync to disk as often, or 
schedule it for the background, etc.?  But even ignoring that, what 
explains why the net throughput is so low?

Some quick empirical data (totally unscientific) on a fresh checkout:

4m3s "time svn co https://blah dir"
201 files (excluding all .svn/ files) (6kb median size, 63k average size 
uncompressed)
12.8mb non-.svn uncompressed file size sum
145kb sent/1.5mb received total net traffic during co
tar.gz of all non-.svn files 1.3mb (so the server->client compression is 
working well)

Taking the 1.5mb / 4m3s gives only 6.4kbps.  HTTP downloads over the same 
SSL connection to this server using wget achieve a steady 110kbps, so svn's 
utilization is not great.

I should be clear that I'm not complaining here, I know svn is still in 
development, premature optimization and all that.  I'm just wondering if 
there's something I've screwed up as the server admin or if this is all 
stuff that code changes will be necessary to fix.

Thanks,
Chris

         



Re: How should permissions be set in a shared workspace environment?

Posted by Ryan Schmidt <su...@ryandesign.com>.
On Apr 11, 2007, at 19:03, Robert Wenner wrote:

>> We also managed to fill up the hard drive on the server pretty
>> quickly with 2.5 gig a pop workspaces.
>
> I do not understand what you mean here.
> Surely your server must have more than 2.5 GB hard disk capacity?

It sounds like every user has a working copy on the server.

The answer to that is either install bigger hard drives in the  
server, or have people check out working copies on their local client  
machines instead of on the server. Working locally would surely  
improve performance as well, but there may be other reasons why the  
working copy has to be on the server.


-- 

To reply to the mailing list, please use your mailer's Reply To All  
function


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: How should permissions be set in a shared workspace environment?

Posted by Robert Wenner <ro...@port25.com>.
James Oltmans wrote:
> We have tried giving them their own copies. They did not like the 20-30
> mins rebuild turnaround time (we do not have a quick-build option, we
> are working with UniData, not C++ or Java) and the fact that any time
> they wanted to see a project team-member's contribution they needed to
> rebuild.

If I get you right, your problem is build times.
Can you check in the compiled files to avoid the long builds?
Then everybody can have their own working copy.

> We also managed to fill up the hard drive on the server pretty
> quickly with 2.5 gig a pop workspaces. 

I do not understand what you mean here.
Surely your server must have more than 2.5 GB hard disk capacity?

Confused,

Robert


Re: svn diff on renamed files

Posted by Chris Hecker <ch...@d6.com>.
>(Issue 1093 isn't about that, though, it's about guessing the URL from
>just the working copy path, and Philip's comments there explain why
>this practice is at least questionable.)

1093 seems to be about diffing around a rename (like my original bug), and 
the comments thread drifts a bit, or am I misreading?

Chris



Re: svn diff on renamed files

Posted by Chris Hecker <ch...@d6.com>.
>I was testing a rename.  It appears we can get diffs for a rename like
>this
>   svn mv old/foo new/foo
>but not for a rename like this
>   svn mv foo/old foo/new

Ah, sorry, I missed your directory change in your post (I just saw the 
identical filename).  Anyway, glad you repro'd it.

Chris




Re: svn diff on renamed files

Posted by Philip Martin <ph...@codematters.co.uk>.
Chris Hecker <ch...@d6.com> writes:

>>Hmm, this works as I expect
>>as does this
>>and this
>
> Yes, but you weren't testing across renames, which was the point of
> the thread.

I was testing a rename.  It appears we can get diffs for a rename like
this

  svn mv old/foo new/foo

but not for a rename like this

  svn mv foo/old foo/new

-- 
Philip Martin


Re: svn diff on renamed files

Posted by Chris Hecker <ch...@d6.com>.
>Hmm, this works as I expect
>as does this
>and this

Yes, but you weren't testing across renames, which was the point of the 
thread.  It's asking for versions of the new file at the old version, as 
you can see in this log.  Let me know if you can't repro it:

c:\checker\dev\icfp\ignoreme>cat > old.txt
this is old

c:\checker\dev\icfp\ignoreme>svn add old.txt
A         old.txt

c:\checker\dev\icfp\ignoreme>svn ci -m "ignore me"
Adding         ignoreme\old.txt
Transmitting file data .
Committed revision 108.

c:\checker\dev\icfp\ignoreme>cat > old.txt
slightly newer

c:\checker\dev\icfp\ignoreme>svn ci -m "ignore me"
Sending        ignoreme\old.txt
Transmitting file data .
Committed revision 109.

c:\checker\dev\icfp\ignoreme>cat > old.txt
even newer

c:\checker\dev\icfp\ignoreme>svn ci -m "ignore me"
Sending        ignoreme\old.txt
Transmitting file data .
Committed revision 110.

c:\checker\dev\icfp\ignoreme>svn ren old.txt new.txt
A         new.txt
D         old.txt

c:\checker\dev\icfp\ignoreme>svn ci -m "ignore me"
Adding         ignoreme\new.txt
Deleting       ignoreme\old.txt

Committed revision 111.

c:\checker\dev\icfp\ignoreme>cat > new.txt
the new stuff

c:\checker\dev\icfp\ignoreme>svn ci -m "ignore me"
Sending        ignoreme\new.txt
Transmitting file data .
Committed revision 112.

c:\checker\dev\icfp\ignoreme>svn diff --old 
https://blah/svn/icfp/2003/trunk/ignoreme/old.txt@109 --new 
https://blah/svn/icfp/2003/trunk/ignoreme/new.txt@HEAD
svn: RA layer request failed
svn: PROPFIND request failed on 
'/svn/icfp/!svn/bc/109/2003/trunk/ignoreme/new.txt'
svn: PROPFIND of '/svn/icfp/!svn/bc/109/2003/trunk/ignoreme/new.txt': 404 
Not Found (https://blah)

c:\checker\dev\icfp\ignoreme>svn diff -r109:HEAD --old 
https://blah/svn/icfp/2003/trunk/ignoreme/old.txt --new 
https://blah/svn/icfp/2003/trunk/ignoreme/new.txt
svn: RA layer request failed
svn: PROPFIND request failed on 
'/svn/icfp/!svn/bc/109/2003/trunk/ignoreme/new.txt'
svn: PROPFIND of '/svn/icfp/!svn/bc/109/2003/trunk/ignoreme/new.txt': 404 
Not Found (https://blah)




Re: svn diff on renamed files

Posted by Philip Martin <ph...@codematters.co.uk>.
kfogel@collab.net writes:

> Chris Hecker <ch...@d6.com> writes:
>> Hmm, when I wrote my original mail I figured there was actually some
>> way to do the svn diff, it was just inconvenient.  Now it seems that
>> you just can't do the diff at all, even with full paths to the
>> repository:
>> 
>> svn diff --old url/old.txt@4 --new url/new.txt@HEAD
>> 
>> doesn't work (or any of the other combos I tried...it keeps looking
>> for new.txt@4 for some reason).  Am I missing something?  Bug 1093 is
>> marked as post-1.0, which seems like a bad idea since this really
>> limits the utility of being able to rename files in the first place,
>> which is a big selling point of svn relative to cvs.  It seems like
>> the --old --new args are just broken to diff?
>> 
>> Or am I missing the magic syntax (in addition to a clue)?
>
> I think your syntax is right, judging from 'svn help diff' -- Philip
> Martin will correct me if wrong.  I think it's a known bug that not
> everything promised by the syntax is actually delivered yet.

Hmm, this works as I expect

svn diff -r4149:HEAD \
 --old http://svn.collab.net/repos/svn/trunk/tools/dev/ \
 --new http://svn.collab.net/repos/svn/trunk/tools/client-side/ \
 bash_completion

as does this

svn diff -r4149:HEAD \
 --old http://svn.collab.net/repos/svn/trunk/tools/dev/bash_completion \
 --new http://svn.collab.net/repos/svn/trunk/tools/client-side/bash_completion

and this

svn diff \
 --old \
 http://svn.collab.net/repos/svn/trunk/tools/dev/bash_completion@4149 \
 --new \
 http://svn.collab.net/repos/svn/trunk/tools/client-side/bash_completion@HEAD

-- 
Philip Martin


Re: svn diff on renamed files

Posted by kf...@collab.net.
Chris Hecker <ch...@d6.com> writes:
> Hmm, when I wrote my original mail I figured there was actually some
> way to do the svn diff, it was just inconvenient.  Now it seems that
> you just can't do the diff at all, even with full paths to the
> repository:
> 
> svn diff --old url/old.txt@4 --new url/new.txt@HEAD
> 
> doesn't work (or any of the other combos I tried...it keeps looking
> for new.txt@4 for some reason).  Am I missing something?  Bug 1093 is
> marked as post-1.0, which seems like a bad idea since this really
> limits the utility of being able to rename files in the first place,
> which is a big selling point of svn relative to cvs.  It seems like
> the --old --new args are just broken to diff?
> 
> Or am I missing the magic syntax (in addition to a clue)?

I think your syntax is right, judging from 'svn help diff' -- Philip
Martin will correct me if wrong.  I think it's a known bug that not
everything promised by the syntax is actually delivered yet.

(Issue 1093 isn't about that, though, it's about guessing the URL from
just the working copy path, and Philip's comments there explain why
this practice is at least questionable.)


Re: svn diff on renamed files

Posted by Chris Hecker <ch...@d6.com>.
Hmm, when I wrote my original mail I figured there was actually some way to 
do the svn diff, it was just inconvenient.  Now it seems that you just 
can't do the diff at all, even with full paths to the repository:

svn diff --old url/old.txt@4 --new url/new.txt@HEAD

doesn't work (or any of the other combos I tried...it keeps looking for 
new.txt@4 for some reason).  Am I missing something?  Bug 1093 is marked as 
post-1.0, which seems like a bad idea since this really limits the utility 
of being able to rename files in the first place, which is a big selling 
point of svn relative to cvs.  It seems like the --old/--new args to diff
are just broken?

Or am I missing the magic syntax (in addition to a clue)?

Chris



Re: svn diff on renamed files

Posted by Chris Hecker <ch...@d6.com>.
>Yup.  This appears to be the same as newly-filed issue #1375; I had
>thought there was an older issue about essentially the same thing, but
>now I can't seem to find it.  (Maybe someone else remembers which one?)

That one is the 1093 pseudo-dupe.  Is there another issue related?

> > > 2.  Making a symbol for URL-to-the-current-wc-directory (like the
> > > symbols for HEAD, PREV, etc.), so I can just say REP_URL/foo.txt (or
> > > whatever) to specify the full path.  This would just get the Url:
> > > from info and use it, nothing fancy, just a shorthand.  <snip>
>If (1) is fixed, what use did you have in mind for (2)?

Obviously getting 1 fixes this particular case for 2, but this one seems 
dead simple and I'm assuming there are other reasons to want to access the 
current directory in the repository.  Same answer for this...

> > > 3.  If there's an @ symbol on a non-full-url'd file name, still look
> > > for it at that revision number in the repository.  In other words,
> > > old.txt is not in my wc, but old.txt@3 makes total sense
> > > contextually, even though there's no old.txt in my wc.  This would
> > > save a lot of typing.
>Or: if we have (2), is (3) really necessary :-) ?

I think 3 is the "better" feature than 2, so picking one I'd pick 3 (say 
that sentence 5 times fast :).  They're both just handy UI shortcuts, and 
accomplish the same thing, but not vital.

As we both say, getting 1 would alleviate the need for 2 and 3 for this 
specific problem, but I've been using svn for all of 2 days, so I don't 
have a good feel for other situations in which they'd be desirable and how 
often you need to type the full url.

Chris




RE: Externals using absolute path on Windows

Posted by David Hickman <da...@audleytravel.com>.
-----Original Message-----

The issue was fixed by making Windows behave like Unix and so give an "invalid property" error rather than an assert.  Adding support for absolute paths is not currently planned due to the security issue.

--
Philip


>>>>>>>>

Ah ok, it was my interpretation of the bug report that was at fault here I think!  Thanks very much for your help Philip.


Best Regards,
David


Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Perrin Harkins <ph...@gmail.com>.
On 1/31/07, Todd Finney <tf...@boygenius.com> wrote:
> It's responsible for making sure that the client has a session, and it
> takes any of the values in the session and stores them in pnotes.

Are you sure that you had removed all of these when you did the test
of copying the session_id, and it still had the same problem?

- Perrin

Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Todd Finney <tf...@boygenius.com>.
At 08:29 AM 1/31/2007 -0500, Perrin Harkins wrote:
>On 1/31/07, Todd Finney <tf...@boygenius.com> wrote:
>>Wouldn't throwing a
>>         return DECLINED unless $r->is_initial_req;
>>at the top of the handler fix the problem, in that case?
>
>Probably, if you don't actually need this handler to run for the final
>URI.  What's the purpose of the handler?

It's responsible for making sure that the client has a session, and it 
takes any of the values in the session and stores them in pnotes.

There's an authentication handler further down the line that uses some of 
those values (if they exist) to determine whether or not the user has 
permission to access the resource.  That returns OK unless 
is_initial_request, though, so I don't think that a change made in 
Session.pm will affect that.

I've gone ahead and added that line, we'll see how it works out.

Thanks for your help, Perrin and Jonathan, I really appreciate it.


Re: task : from which directory is the java command started

Posted by "Clifton C. Craig" <cc...@icsaward.com>.
Rudolf,

I just read a few mails back and saw your earlier question. I believe 
you want the following:

<target name="hsqldb" description="Start the HSQLDB sample database">
   <java classname="org.hsqldb.Server" fork="true" dir="${db.dir}"
         failonerror="true" maxmemory="128m">
      <arg value="-database"/>
      <arg value="testxyz"/>
      <classpath>
         <pathelement location="${lib.home}/hsqldb.jar"/>
      </classpath>
   </java>
</target>

Where ${db.dir} is a property you can set either interactively via input 
task or via another Ant task like <property>. You can query the working 
directory via the system property user.dir (e.g.
System.getProperty("user.dir")). Use this to provide basic error 
trapping within your Server class.

Cliff

Rudolf Nottrott wrote:

> Thanks Antoine for the suggestion, but it doesn't seem to make a 
> difference.
>
> Basically, I need to make sure that the command
>
> 'java org.hsqldb.Server -database testxyz'
>
> which is run by the ant target below, is started from the current 
> directory, the directory in which the database files are.
>
> How can I control the directory in which a  <java task gets started?  
> Is there a parameter for that?
>
> Thanks,
> Rudolf
>
>
> At 08:53 AM 1/20/2004 +0100, you wrote:
>
>> You probably want 2 arguments :
>> <arg value="-database"/>
>> <arg value="testxyz"/>
>>
>> Antoine
>
>
>> Rudolf Nottrott wrote:
>>
>>> Hi,
>>>
>>> I have an Ant <java...> task that runs a database program named 
>>> org.hsqldb.Server, see below.  The argument  to the Server is a 
>>> database name, "-database testxyz".  The database testxyz.* is 
>>> supposed to be taken from (or created in) the directory you were in 
>>> when you issued the Java command that started the database Server.
>>> Now, I didn't start Java -- I started Ant which started Java.
>>>
>>> Here is the task:
>>> <target name="hsqldb" description="Start the HSQLB sample database">
>>>    <java classname="org.hsqldb.Server" fork="true" failonerror="true"
>>> maxmemory="128m" >
>>>       <arg value="-database testxyz"/>
>>>       <classpath><pathelement 
>>> location="${lib.home}/hsqldb.jar"/></classpath>
>>>     </java>
>>> </target>
>>>
>>> The server starts up OK, but I'm not getting the database I want, 
>>> testxyz, so I'm trying to verify the directory from which the Java 
>>> command of the <java ...> task was issued.
>>> Any ideas on how to trace this?  Is there perhaps some task like "print 
>>> working directory" that I could run in conjunction with the 
>>> <java ...> task?
>>>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@ant.apache.org
> For additional commands, e-mail: user-help@ant.apache.org
>
>
>

---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@ant.apache.org
For additional commands, e-mail: user-help@ant.apache.org


Re: mason book

Posted by Per Einar Ellefsen <pe...@oslo.online.no>.
At 16:56 09.11.2002, Stas Bekman wrote:
>>>I suggest that we always list the strictly mod_perl books and rotate the 
>>>other mod_perl related books. So if they get randomly placed on 
>>>different pages and reshuffled once a week that should be fair.
>>
>>The problem with listing all the strictly mod_perl books is that more of 
>>them are coming. With your book coming, we'll have a list of 4 books, plus 
>>one in rotation, and that's way too much for a sidebar IMO. I don't want 
>>to see more than 3 books there, so there needs to be some kind of 
>>rotation. Maybe we list the newest mod_perl book at the top, and rotate 
>>the two other places with the rest?
>
>What I have in mind is something like this:
>
>- Always have 3 ads,
>   2 first dedicated for the mod_perl specific books
>   1 for mod_perl related books
>
>That way we always give priority to the mod_perl-specific books. Of course, 
>both groups would use a random algorithm to make the choice, so different 
>pages will show different books.

Sure, good idea. I'll do it soon.


-- 
Per Einar Ellefsen
pereinar@oslo.online.no



---------------------------------------------------------------------
To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
For additional commands, e-mail: docs-dev-help@perl.apache.org


Re: mason book

Posted by Stas Bekman <st...@stason.org>.
Per Einar Ellefsen wrote:
> Welcome back Stas,

Thanks :)

>>>> I've rewritten the templates to be more modular and keep a structure 
>>>> of books that we have ads for, and change the order each day (the 
>>>> site is rebuilt in its entirety each day, so that shouldn't be a 
>>>> problem).
>>>
>>
>> But it's not. It'd be a waste to rebuild it every day. Currently it 
>> gets rebuilt every Monday. See README.SITE:
> 
> 
> Thank you for that one, I was sure I wasn't seeing the correct rotation! 
> I confounded the daily one with a full (no PDF) rebuild. My mistake.

No prob.

>>> Right now
>>>
>>>> it's like it was before, because we're thursday (and for 
>>>> Date::Format, thursday == 4 mod 4 == 0, so the first book before is 
>>>> the first book now). But when you'll see this it might be friday 
>>>> already so you'll see the Mason book :)
>>>
>>>
>>> Great. It's good to see all these books that are related to mod_perl!
>>
>>
>> In that case we should add several other books. My memory is still 
>> full of mountain sites, so only the Slash book comes to mind. But 
>> there are at least several more.
>>
>> I suggest that we always list the strictly mod_perl books and rotate 
>> the other mod_perl related books. So if they get randomly placed on 
>> different pages and reshuffled once a week that should be fair.
> 
> 
> The problem with listing all the strictly mod_perl books is that more of 
> them are coming. With your book coming, we'll have a list of 4 books, 
> plus one in rotation, and that's way too much for a sidebar IMO. I don't 
> want to see more than 3 books there, so there needs to be some kind of 
> rotation. Maybe we list the newest mod_perl book at the top, and rotate 
> the two other places with the rest?

What I have in mind is something like this:

- Always have 3 ads,
   2 first dedicated for the mod_perl specific books
   1 for mod_perl related books

That way we always give priority to the mod_perl-specific books. Of 
course, both groups would use a random algorithm to make the choice, so 
different pages will show different books.

__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:stas@stason.org http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com


---------------------------------------------------------------------
To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
For additional commands, e-mail: docs-dev-help@perl.apache.org


Re: mason book

Posted by Per Einar Ellefsen <pe...@oslo.online.no>.
Welcome back Stas,

At 05:25 08.11.2002, Stas Bekman wrote:
>Thomas Eibner wrote:
>>On Thu, Oct 24, 2002 at 12:58:58AM +0200, Per Einar Ellefsen wrote:
>>
>>>I've rewritten the templates to be more modular and keep a structure of 
>>>books that we have ads for, and change the order each day (the site is 
>>>rebuilt in its entirety each day, so that shouldn't be a problem).
>
>But it's not. It'd be a waste to rebuild it every day. Currently it gets 
>rebuilt every Monday. See README.SITE:

Thank you for that one, I was sure I wasn't seeing the correct rotation! I 
confounded the daily one with a full (no PDF) rebuild. My mistake.

>>Right now
>>>it's like it was before, because we're thursday (and for Date::Format, 
>>>thursday == 4 mod 4 == 0, so the first book before is the first book 
>>>now). But when you'll see this it might be friday already so you'll see 
>>>the Mason book :)
>>
>>Great. It's good to see all these books that are related to mod_perl!
>
>In that case we should add several other books. My memory is still full of 
>mountain sites, so only the Slash book comes to mind. But there are at 
>least several more.
>
>I suggest that we always list the strictly mod_perl books and rotate the 
>other mod_perl related books. So if they get randomly placed on different 
>pages and reshuffled once a week that should be fair.

The problem with listing all the strictly mod_perl books is that more of 
them are coming. With your book coming, we'll have a list of 4 books, plus 
one in rotation, and that's way too much for a sidebar IMO. I don't want to 
see more than 3 books there, so there needs to be some kind of rotation. 
Maybe we list the newest mod_perl book at the top, and rotate the two other 
places with the rest?


-- 
Per Einar Ellefsen
pereinar@oslo.online.no



---------------------------------------------------------------------
To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
For additional commands, e-mail: docs-dev-help@perl.apache.org


Re: mason book

Posted by Stas Bekman <st...@stason.org>.
Thomas Eibner wrote:
> On Thu, Oct 24, 2002 at 12:58:58AM +0200, Per Einar Ellefsen wrote:
> 
>>I've rewritten the templates to be more modular and keep a structure of 
>>books that we have ads for, and change the order each day (the site is 
>>rebuilt in its entirety each day, so that shouldn't be a problem).

But it's not. It'd be a waste to rebuild it every day. Currently it gets 
rebuilt every Monday. See README.SITE:

# every monday rebuild all, including pdf
30 05 * * 1 /home/perlwww/apache.org/modperl-docs/bin/site_build_force_pdf_index
# update all (only changes/no pdf) every 6 hours
15 6,12,18 * * * /home/perlwww/apache.org/modperl-docs/bin/site_build_index
# update all (only changes and pdfs) once a day
15 0 * * * /home/perlwww/apache.org/modperl-docs/bin/site_build_pdf_index

> Right now 
>>it's like it was before, because we're thursday (and for Date::Format, 
>>thursday == 4 mod 4 == 0, so the first book before is the first book now). 
>>But when you'll see this it might be friday already so you'll see the Mason 
>>book :)
> 
> 
> Great. It's good to see all these books that are related to mod_perl!

In that case we should add several other books. My memory is still full 
of mountain sites, so only the Slash book comes to mind. But there are 
at least several more.

I suggest that we always list the strictly mod_perl books and rotate the 
other mod_perl related books. So if they get randomly placed on 
different pages and reshuffled once a week that should be fair.


__________________________________________________________________
Stas Bekman            JAm_pH ------> Just Another mod_perl Hacker
http://stason.org/     mod_perl Guide ---> http://perl.apache.org
mailto:stas@stason.org http://use.perl.org http://apacheweek.com
http://modperlbook.org http://apache.org   http://ticketmaster.com


---------------------------------------------------------------------
To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
For additional commands, e-mail: docs-dev-help@perl.apache.org


Re: mason book

Posted by Thomas Eibner <th...@stderr.net>.
On Thu, Oct 24, 2002 at 12:58:58AM +0200, Per Einar Ellefsen wrote:
> I've rewritten the templates to be more modular and keep a structure of 
> books that we have ads for, and change the order each day (the site is 
> rebuilt in its entirety each day, so that shouldn't be a problem). Right now 
> it's like it was before, because we're thursday (and for Date::Format, 
> thursday == 4 mod 4 == 0, so the first book before is the first book now). 
> But when you'll see this it might be friday already so you'll see the Mason 
> book :)

Great. It's good to see all these books that are related to mod_perl!

-- 
  Thomas Eibner <http://thomas.eibner.dk/> DnsZone <http://dnszone.org/>
  mod_pointer <http://stderr.net/mod_pointer> <http://photos.eibner.dk/>
  !(C)<http://copywrong.dk/>                  <http://apachegallery.dk/>
          Putting the HEST in .COM <http://www.hestdesign.com/>

---------------------------------------------------------------------
To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
For additional commands, e-mail: docs-dev-help@perl.apache.org


Re: mason book

Posted by Per Einar Ellefsen <pe...@oslo.online.no>.
[Sorry for my last e-mail, which "accidentally" went off before I had 
written anything :)]

At 00:02 25.10.2002, Per Einar Ellefsen wrote:
>At 22:21 24.10.2002, Thomas Eibner wrote:
>>On Thu, Oct 24, 2002 at 10:13:58PM +0200, Per Einar Ellefsen wrote:
>> > At 22:03 24.10.2002, allan wrote:
>> > >why not the eagle book [ it's the last ] ?
>> >
>> > Yeah, but also the most important one. But it's probably more fair to have
>> > the last one removed.
>>
>>I think the Eagle is the most important one too, but how about some
>>kind of cycling mechanism so all books get exposure?
>
>Yes, good idea. I'll re-work the templates a little now and try to get 
>that one working nicely.

I've rewritten the templates to be more modular and keep a structure of 
books that we have ads for, and change the order each day (the site is 
rebuilt in its entirety each day, so that shouldn't be a problem). Right 
now it's like it was before, because it's Thursday (and for Date::Format, 
Thursday is day 4, and 4 mod 4 == 0, so the first book before is the first 
book now). But by the time you see this it might be Friday already, so 
you'll see the Mason book :)

The algorithm could be adapted to another method of picking the first book, 
by random for example, but I don't think there's a need for that.
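The day-based rotation described above can be sketched as follows (a 
standalone illustration only; the actual site templates are written in 
Perl, and the book titles here are hypothetical):

```python
# Day-based ad rotation: with N books, the book at index
# (day_of_week % N) is listed first, so the order shifts each day.
def rotate_books(books, day_of_week):
    first = day_of_week % len(books)
    return books[first:] + books[:first]

# Thursday is day 4 for Date::Format; with 4 books, 4 % 4 == 0,
# so Thursday shows the original order, as noted above.
books = ["Eagle book", "Cookbook", "Mason book", "Slash book"]
print(rotate_books(books, 4))  # original order on Thursday
print(rotate_books(books, 5))  # shifted by one on Friday
```

Picking the first book by day of week (rather than at random) keeps the 
order stable for a whole day across all pages.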


-- 
Per Einar Ellefsen
pereinar@oslo.online.no



---------------------------------------------------------------------
To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
For additional commands, e-mail: docs-dev-help@perl.apache.org


Re: mason book

Posted by Per Einar Ellefsen <pe...@oslo.online.no>.
At 00:02 25.10.2002, Per Einar Ellefsen wrote:
>At 22:21 24.10.2002, Thomas Eibner wrote:
>>On Thu, Oct 24, 2002 at 10:13:58PM +0200, Per Einar Ellefsen wrote:
>> > At 22:03 24.10.2002, allan wrote:
>> > >why not the eagle book [ it's the last ] ?
>> >
>> > Yeah, but also the most important one. But it's probably more fair to have
>> > the last one removed.
>>
>>I think the Eagle is the most important one too, but how about some
>>kind of cycling mechanism so all books get exposure?
>
>Yes, good idea. I'll re-work the templates a little now and try to get 
>that one working nicely.
>
>
>--
>Per Einar Ellefsen
>pereinar@oslo.online.no
>
>
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
>For additional commands, e-mail: docs-dev-help@perl.apache.org
>

-- 
Per Einar Ellefsen
pereinar@oslo.online.no



---------------------------------------------------------------------
To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
For additional commands, e-mail: docs-dev-help@perl.apache.org


Re: mason book

Posted by Per Einar Ellefsen <pe...@oslo.online.no>.
At 22:21 24.10.2002, Thomas Eibner wrote:
>On Thu, Oct 24, 2002 at 10:13:58PM +0200, Per Einar Ellefsen wrote:
> > At 22:03 24.10.2002, allan wrote:
> > >why not the eagle book [ it's the last ] ?
> >
> > Yeah, but also the most important one. But it's probably more fair to have
> > the last one removed.
>
>I think the Eagle is the most important one too, but how about some
>kind of cycling mechanism so all books get exposure?

Yes, good idea. I'll re-work the templates a little now and try to get that 
one working nicely.


-- 
Per Einar Ellefsen
pereinar@oslo.online.no



---------------------------------------------------------------------
To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
For additional commands, e-mail: docs-dev-help@perl.apache.org


Re: how to pass a Map to a method

Posted by Randal Walser <ra...@comcast.net>.
At 04:11 PM 1/14/2005 -0800, you wrote:
>There are some significant grammar changes coming, stay tuned.  Most notably 
>a patch to allow decimal numbers.  Put this into a Bugzilla entry and I'll 
>look into it.

I'll do that.  Thanks.  I'm looking forward to your patch.

Randal


---------------------------------------------------------------------
To unsubscribe, e-mail: velocity-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: velocity-user-help@jakarta.apache.org


Re: how to pass a Map to a method

Posted by Will Glass-Husain <wg...@forio.com>.
There are some significant grammar changes coming, stay tuned.  Most notably 
a patch to allow decimal numbers.  Put this into a Bugzilla entry and I'll 
look into it.

Thanks,

WILL

----- Original Message ----- 
From: "Randal Walser" <ra...@comcast.net>
To: "Velocity Users List" <ve...@jakarta.apache.org>
Sent: Friday, January 14, 2005 3:50 PM
Subject: Re: how to pass a Map to a method


> At 10:46 PM 1/14/2005 +0900, you wrote:
>>I'd say it should.  Maps are features of the upcoming 1.5, which is
>>still under development.  Maybe it's a bug, maybe it's simply not
>>implemented yet.  You could file a bugzilla issue so the developers
>>get reminded.  Better yet, you can submit a patch to give the
>>suggested behaviour!  ;)
>
> There have been no changes to the grammar in over a year (anywhere in
> the runtime/parser directory, anyway).  Apparently, shoring up the
> language implementation hasn't been a priority for a while, unless the
> developers aren't committing language changes to the repository.  I've
> noticed some other "curiosities" in the language, as well, so maybe
> I'll dig deeper into it myself when I get a chance.  I'll submit a
> patch if I come up with anything useful.
>
> Thanks,
>
> Randal
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: velocity-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: velocity-user-help@jakarta.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: velocity-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: velocity-user-help@jakarta.apache.org


Re: Re: Cannot load JDBC driver class

Posted by Shyly Amarasinghe <am...@dpw.com>.
You're beautiful!

I already had the DBCP jar files in place, but following your suggestions, I used the XML for Oracle rather than MySQL and moved everything out of server.xml's global JNDI into the context, and it worked! (Got Connection org.apache.commons.dbcp.PoolableConnection@4rfg345).  I guess this means I can make pooled connections within a webapp, which is much more than I could do before.

Thank you so much for all of your help - I could not have done this without you.
Shyly

At 10:07 PM 4/17/2003 -0400, you wrote:

>> Um, is TOMCAT supposed to be an environment variable?
>
>No, not for Tomcat anyway.
>
>It sounds like you have the usual problem stuff right, so it's got to be in
>the setup in the DataSource end of it.
>
>I see a couple of things here.  First off, you are not using any connection 
>pooling; does the Sybase JDBC driver pool internally?
>
>If not, I'd suggest using the setup in the JNDI DataSource HowTo with DBCP.
>It just involves dropping the jar files listed in the HowTo into common/lib.
>Your performance will be much better.
>
>I wouldn't use Administrator yet to do this type of setup, it is still not
>100%.  Set up your server.xml file manually.
>
>Also, is the Sybase driver type 4? You may need a type 4 driver to connect
>directly as a DataSource, but DBCP will use earlier drivers and provide the
>necessary interfaces in its DataSource Factory.
>
>Also, the examples for MySQL with DBCP do not use the Global JNDI area and
>they worked for me first time even though I use a different database.  I
>just changed the driver name and URL.
>
>If you prefer to use the Sybase driver unpooled, try moving all your stuff
>into the context and out of the Global JNDI area in server.xml.
>
>Also try changing your Java code (I don't remember which form you used) from
>this:
>
>// Obtain our environment naming context
>Context initCtx = new InitialContext();
>Context envCtx = (Context) initCtx.lookup("java:comp/env");
>
>// Look up our data source
>DataSource ds = (DataSource)
>  envCtx.lookup("jdbc/EmployeeDB");
>
>// Allocate and use a connection from the pool
>Connection conn = ds.getConnection();
>.... use this connection to access the database ...
>conn.close();
>To this:
>
>
>try{
>      Context ctx = new InitialContext();
>      if(ctx == null )
>          throw new Exception("Boom - No Context");
>
>      DataSource ds =
>            (DataSource)ctx.lookup("java:comp/env/jdbc/TestDB");
>
>      if (ds != null) {
>        Connection conn = ds.getConnection();
>        // ... use the connection, then close it ...
>        conn.close();
>      }
>} catch (Exception e) {
>      e.printStackTrace();
>}
>
>
>Hope this helps!
>
>Rick
>
>>
>> Everything else (servlet, taglib) seems to work fine.  FYI, I set up the
>datasource using the tomcat administrator under Resources -> Datasources.
>(I also tried editing server.xml manually to create it, but got an error
>there as well.)  Also worth noting, when I go through the administrator to
>Tomcat server -> Server -> Host -> Context (/para) -> Resources ->
>Datasources, I get an error message "org.apache.jasper.JasperException:
>Exception retrieving attribute 'driverClassName'" which is confusing since
>that attribute is defined in server.xml.  This error message isn't in any of
>the other webapps.  If I take the <resource-ref> code out of web.xml, that
>error message goes away too.
>>
>> You're being very patient and helpful - thank you very much!
>>
>> Here is webapps\para\web-inf\web.xml
>> <?xml version="1.0" encoding="ISO-8859-1"?>
>> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application
>2.3//EN"
>>     "http://java.sun.com/dtd/web-app_2_3.dtd">
>> <web-app>
>>      <display-name>paralegal</display-name>
>>      <description>Paralegal status</description>
>>      <servlet>
>>           <servlet-name>
>>                LawSchools
>>           </servlet-name>
>>           <servlet-class>
>>                LawSchools
>>           </servlet-class>
>>      </servlet>
>>     <servlet-mapping>
>>         <servlet-name>
>>             LawSchools
>>         </servlet-name>
>>         <url-pattern>
>>             /LawSchools
>>         </url-pattern>
>>     </servlet-mapping>
>>      <taglib>
>>
><taglib-uri>http://jakarta.apache.org/taglibs/application-1.0</taglib-uri>
>>           <taglib-location>/WEB-INF/c.tld</taglib-location>
>>      </taglib>
>>   <resource-ref>
>>       <description>My DB Connection</description>
>>       <res-ref-name>jdbc/mydb</res-ref-name>
>>       <res-type>javax.sql.DataSource</res-type>
>>       <res-auth>Container</res-auth>
>>   </resource-ref>
>> </web-app>
>> ************************
>> And this is server.xml
>>
>> <?xml version='1.0' encoding='utf-8'?>
>> <Server className="org.apache.catalina.core.StandardServer" debug="0"
>port="8005" shutdown="SHUTDOWN">
>>   <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"
>debug="0" jsr77Names="false"/>
>>   <Listener
>className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
>debug="0"/>
>>   <GlobalNamingResources>
>>     <Environment name="simpleValue" override="true"
>type="java.lang.Integer" value="30"/>
>>     <Resource auth="Container" description="User database that can be
>updated and saved" name="UserDatabase" scope="Shareable"
>type="org.apache.catalina.UserDatabase"/>
>>     <Resource name="jdbc/profsysbackup" scope="Shareable"
>type="javax.sql.DataSource"/>
>>     <ResourceParams name="UserDatabase">
>>       <parameter>
>>         <name>factory</name>
>>         <value>org.apache.catalina.users.MemoryUserDatabaseFactory</value>
>>       </parameter>
>>       <parameter>
>>         <name>pathname</name>
>>         <value>conf/tomcat-users.xml</value>
>>       </parameter>
>>     </ResourceParams>
>>     <ResourceParams name="jdbc/mydb">
>>       <parameter>
>>         <name>maxWait</name>
>>         <value>5000</value>
>>       </parameter>
>>       <parameter>
>>         <name>maxActive</name>
>>         <value>2</value>
>>       </parameter>
>>       <parameter>
>>         <name>password</name>
>>         <value>xxx</value>
>>       </parameter>
>>       <parameter>
>>         <name>url</name>
>>         <value>jdbc:sybase:Tds:xxx:5000</value>
>>       </parameter>
>>       <parameter>
>>         <name>driverClassName</name>
>>         <value>com.sybase.jdbc2.jdbc.SybDriver</value>
>>       </parameter>
>>       <parameter>
>>         <name>maxIdle</name>
>>         <value>2</value>
>>       </parameter>
>>       <parameter>
>>         <name>username</name>
>>         <value>xxx</value>
>>       </parameter>
>>     </ResourceParams>
>>   </GlobalNamingResources>
>>   <Service className="org.apache.catalina.core.StandardService" debug="0"
>name="Tomcat-Standalone">
>>     <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
>acceptCount="100" bufferSize="2048" compression="off" connectionLinger="-1"
>connectionTimeout="20000" debug="0" disableUploadTimeout="true"
>enableLookups="true" maxKeepAliveRequests="100" maxProcessors="75"
>minProcessors="5" port="8080"
>protocolHandlerClassName="org.apache.coyote.http11.Http11Protocol"
>proxyPort="0" redirectPort="8443" scheme="http" secure="false"
>tcpNoDelay="true" useURIValidationHack="false">
>>       <Factory
>className="org.apache.catalina.net.DefaultServerSocketFactory"/>
>>     </Connector>
>>     <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
>acceptCount="10" bufferSize="2048" compression="off" connectionLinger="-1"
>connectionTimeout="0" debug="0" disableUploadTimeout="false"
>enableLookups="true" maxKeepAliveRequests="100" maxProcessors="75"
>minProcessors="5" port="8009"
>protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"
>proxyPort="0" redirectPort="8443" scheme="http" secure="false"
>tcpNoDelay="true" useURIValidationHack="false">
>>       <Factory
>className="org.apache.catalina.net.DefaultServerSocketFactory"/>
>>     </Connector>
>>     <Engine className="org.apache.catalina.core.StandardEngine" debug="0"
>defaultHost="localhost"
>mapperClass="org.apache.catalina.core.StandardEngineMapper"
>name="Standalone">
>>       <Host className="org.apache.catalina.core.StandardHost"
>appBase="webapps" autoDeploy="true"
>configClass="org.apache.catalina.startup.ContextConfig"
>contextClass="org.apache.catalina.core.StandardContext" debug="0"
>deployXML="true"
>errorReportValveClass="org.apache.catalina.valves.ErrorReportValve"
>liveDeploy="true" mapperClass="org.apache.catalina.core.StandardHostMapper"
>name="localhost" unpackWARs="true">
>>         <Context className="org.apache.catalina.core.StandardContext"
>cachingAllowed="true"
>charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
>crossContext="false" debug="0" displayName="Tomcat Administration
>Application" docBase="../server/webapps/admin"
>mapperClass="org.apache.catalina.core.StandardContextMapper" path="/admin"
>privileged="true" reloadable="false" swallowOutput="false" useNaming="true"
>wrapperClass="org.apache.catalina.core.StandardWrapper">
>>           <Logger className="org.apache.catalina.logger.FileLogger"
>debug="0" directory="logs" prefix="localhost_admin_log." suffix=".txt"
>timestamp="true" verbosity="1"/>
>>         </Context>
>>         <Context className="org.apache.catalina.core.StandardContext"
>cachingAllowed="true"
>charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
>crossContext="false" debug="0" displayName="Webdav Content Management"
>docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\webdav"
>mapperClass="org.apache.catalina.core.StandardContextMapper" path="/webdav"
>privileged="false" reloadable="false" swallowOutput="false" useNaming="true"
>wrapperClass="org.apache.catalina.core.StandardWrapper">
>>         </Context>
>>         <Context className="org.apache.catalina.core.StandardContext"
>cachingAllowed="true"
>charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
>crossContext="true" debug="0" displayName="paralegal"
>docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\para"
>mapperClass="org.apache.catalina.core.StandardContextMapper" path="/para"
>privileged="false" reloadable="false" swallowOutput="false" useNaming="true"
>wrapperClass="org.apache.catalina.core.StandardWrapper">
>>           <Resource auth="Container" description="DB Connection"
>name="jdbc/mydb" scope="Shareable" type="javax.sql.DataSource"/>
>>         </Context>
>>         <Context className="org.apache.catalina.core.StandardContext"
>cachingAllowed="true"
>charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
>crossContext="true" debug="0" displayName="Tomcat Examples"
>docBase="examples"
>mapperClass="org.apache.catalina.core.StandardContextMapper"
>path="/examples" privileged="false" reloadable="true" swallowOutput="false"
>useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
>>           <Logger className="org.apache.catalina.logger.FileLogger"
>debug="0" directory="logs" prefix="localhost_examples_log." suffix=".txt"
>timestamp="true" verbosity="1"/>
>>           <Parameter name="context.param.name" override="false"
>value="context.param.value"/>
>>           <Ejb home="com.wombat.empl.EmployeeRecordHome"
>name="ejb/EmplRecord" remote="com.wombat.empl.EmployeeRecord"
>type="Entity"/>
>>           <Ejb description="Example EJB Reference"
>home="com.mycompany.mypackage.AccountHome" name="ejb/Account"
>remote="com.mycompany.mypackage.Account" type="Entity"/>
>>           <Environment name="maxExemptions" override="true"
>type="java.lang.Integer" value="15"/>
>>           <Environment name="foo/name4" override="true"
>type="java.lang.Integer" value="10"/>
>>           <Environment name="minExemptions" override="true"
>type="java.lang.Integer" value="1"/>
>>           <Environment name="foo/bar/name2" override="true"
>type="java.lang.Boolean" value="true"/>
>>           <Environment name="name3" override="true"
>type="java.lang.Integer" value="1"/>
>>           <Environment name="foo/name1" override="true"
>type="java.lang.String" value="value1"/>
>>           <LocalEjb description="Example Local EJB Reference"
>home="com.mycompany.mypackage.ProcessOrderHome"
>local="com.mycompany.mypackage.ProcessOrder" name="ejb/ProcessOrder"
>type="Session"/>
>>           <Resource auth="SERVLET" name="jdbc/EmployeeAppDb"
>scope="Shareable" type="javax.sql.DataSource"/>
>>           <Resource auth="Container" name="mail/Session" scope="Shareable"
>type="javax.mail.Session"/>
>>           <ResourceParams name="jdbc/EmployeeAppDb">
>>             <parameter>
>>               <name>password</name>
>>               <value></value>
>>             </parameter>
>>             <parameter>
>>               <name>url</name>
>>               <value>jdbc:HypersonicSQL:database</value>
>>             </parameter>
>>             <parameter>
>>               <name>driverClassName</name>
>>               <value>org.hsql.jdbcDriver</value>
>>             </parameter>
>>             <parameter>
>>               <name>username</name>
>>               <value>sa</value>
>>             </parameter>
>>           </ResourceParams>
>>           <ResourceParams name="mail/Session">
>>             <parameter>
>>               <name>mail.smtp.host</name>
>>               <value>localhost</value>
>>             </parameter>
>>           </ResourceParams>
>>           <ResourceLink global="simpleValue" name="linkToGlobalResource"
>type="java.lang.Integer"/>
>>         </Context>
>>         <Context className="org.apache.catalina.core.StandardContext"
>cachingAllowed="true"
>charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
>crossContext="false" debug="0"
>docBase="c:/local/tomcat/jakarta-tomcat-4.1.24/webapps/application-examples"
>mapperClass="org.apache.catalina.core.StandardContextMapper"
>path="/application-examples" privileged="false" reloadable="false"
>swallowOutput="false" useNaming="true"
>wrapperClass="org.apache.catalina.core.StandardWrapper">
>>         </Context>
>>         <Context className="org.apache.catalina.core.StandardContext"
>cachingAllowed="true"
>>             charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
>>             crossContext="false" debug="0" displayName="Tomcat Documentation"
>>             docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\tomcat-docs"
>>             mapperClass="org.apache.catalina.core.StandardContextMapper"
>>             path="/tomcat-docs" privileged="false" reloadable="false"
>>             swallowOutput="false" useNaming="true"
>>             wrapperClass="org.apache.catalina.core.StandardWrapper">
>>         </Context>
>>         <Context className="org.apache.catalina.core.StandardContext"
>>             cachingAllowed="true"
>>             charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
>>             crossContext="true" debug="0" displayName="Shyly"
>>             docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\shyly"
>>             mapperClass="org.apache.catalina.core.StandardContextMapper" path="/shyly"
>>             privileged="false" reloadable="true" swallowOutput="false" useNaming="true"
>>             wrapperClass="org.apache.catalina.core.StandardWrapper">
>>         </Context>
>>         <Context className="org.apache.catalina.core.StandardContext"
>>             cachingAllowed="true"
>>             charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
>>             crossContext="false" debug="0" displayName="Tomcat Manager Application"
>>             docBase="../server/webapps/manager"
>>             mapperClass="org.apache.catalina.core.StandardContextMapper" path="/manager"
>>             privileged="true" reloadable="false" swallowOutput="false" useNaming="true"
>>             wrapperClass="org.apache.catalina.core.StandardWrapper">
>>           <ResourceLink global="UserDatabase" name="users"
>>               type="org.apache.catalina.UserDatabase"/>
>>         </Context>
>>         <Context className="org.apache.catalina.core.StandardContext"
>>             cachingAllowed="true"
>>             charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
>>             crossContext="false" debug="0" displayName="Welcome to Tomcat"
>>             docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\ROOT"
>>             mapperClass="org.apache.catalina.core.StandardContextMapper" path=""
>>             privileged="false" reloadable="false" swallowOutput="false" useNaming="true"
>>             wrapperClass="org.apache.catalina.core.StandardWrapper">
>>         </Context>
>>         <Logger className="org.apache.catalina.logger.FileLogger"
>>             debug="9" directory="logs" prefix="localhost_log." suffix=".txt"
>>             timestamp="true" verbosity="4"/>
>>       </Host>
>>       <Logger className="org.apache.catalina.logger.FileLogger" debug="0"
>>           directory="logs" prefix="catalina_log." suffix=".txt" timestamp="true"
>>           verbosity="1"/>
>>       <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
>>           debug="0" resourceName="UserDatabase" validate="true"/>
>>     </Engine>
>>   </Service>
>> </Server>
>>
>> At 02:12 PM 4/17/2003 -0400, you wrote:
>>
>> >Hi Shyly,
>> >
>> >It looks like you have everything right.
>> >
>> >You are not missing an environment variable, assuming you meant
>> >%CATALINA_HOME% and not %TOMCAT% or %CATALINA_HOM% below.
>> >
>> >Do you have the context entry in server.xml inside <Host>?
>> >
>> >Also, do you have the <resource-ref> in the right place in the web.xml
>> >file? Those entries have to be in the right order.
>> >
>> >It has to be after </error-page> and before <security-constraint>.
>> >
>> >Can you post (or send directly) your entire server.xml and web.xml
>> >files after sanitizing them?
>> >
>> >Rick
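
For reference, the ordering Rick describes comes from the Servlet 2.3 web-app DTD: <resource-ref> belongs after </error-page> and before <security-constraint>. A hedged sketch of the placement (the element values are illustrative only, loosely matching the "users" ResourceLink above; adapt them to your own app):

```xml
<web-app>
  <!-- ... servlets, servlet-mappings, welcome-file-list, error-pages ... -->

  <error-page>
    <error-code>404</error-code>
    <location>/notfound.jsp</location>
  </error-page>

  <!-- <resource-ref> goes here: after </error-page>, before <security-constraint> -->
  <resource-ref>
    <res-ref-name>users</res-ref-name>
    <res-type>org.apache.catalina.UserDatabase</res-type>
    <res-auth>Container</res-auth>
  </resource-ref>

  <security-constraint>
    <!-- ... -->
  </security-constraint>
</web-app>
```

If the elements are out of this order, Tomcat's web.xml validation fails at deploy time, which can look like the JNDI resource simply not being found.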
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
>> For additional commands, e-mail: tomcat-user-help@jakarta.apache.org
>>


RE: Re: How should permissions be set in a shared workspace environment?

Posted by James Oltmans <jo...@bolosystems.com>.

-----Original Message-----
From: Robert Wenner [mailto:robert@port25.com] 
Sent: Wednesday, April 11, 2007 6:03 PM
To: users@subversion.tigris.org
Subject: Re: How should permissions be set in a shared workspace
environment?

>James Oltmans wrote:
>> We have tried giving them their own copies. They did not like the 20-30
>> mins rebuild turnaround time (we do not have a quick-build option, we
>> are working with UniData, not C++ or Java) and the fact that any time
>> they wanted to see a project team-member's contribution they needed to
>> rebuild.
>
>If I get you right, your problem is build times.
>Can you check in the compiled files to avoid the long builds?
>Then everybody can have their own working copy.

The problem with compiled files is that the data may be specific to the
workspace. We wouldn't want absolute paths getting into the build any
more than we'd want a lot of magic constants and paths to C:\My
Documents\bobjones\ getting into a Java build.

>> We also managed to fill up the hard drive on the server pretty
>> quickly with 2.5 gig a pop workspaces. 

>I do not understand what you mean here.
>Surely your server must have more than 2.5 GB hard disk capacity?

No, we have a 300 GB machine, but we have several developers and a lot of
projects. Each project's workspace eats 2.5 GB+ of hard drive space (even
more if they want a lot of dev data).

>Confused,
>
>Robert

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org



Re: Configuring SSL on default Broker

Posted by Chris Odom <ch...@mediadriver.com>.
Never mind, this is a bug. It turns out that even though you set the
SslContext within the activemq-broker.xml, it is never used when creating
the SslSocketConnector for the Broker, which is why you see the behavior I
am seeing.


On Tue, 24 Apr 2012 10:48:57 -0500, Chris Odom
<ch...@mediadriver.com>
wrote:
> Further debugging reveals that the SpringSSLContext is parsed correctly
> from the activemq-broker.xml, but during the binding process I found this:
> 
> org.apache.activemq.transport.TransportFactory
>     public static TransportServer bind(BrokerService brokerService, URI location) throws IOException {
>         TransportFactory tf = findTransportFactory(location);
>         if( brokerService!=null && tf instanceof BrokerServiceAware ) {
>             ((BrokerServiceAware)tf).setBrokerService(brokerService);
>         }
>         try {
>             if( brokerService!=null ) {
>                 SslContext.setCurrentSslContext(brokerService.getSslContext());
>             }
>             return tf.doBind(location);
>         } finally {
>             SslContext.setCurrentSslContext(null);
>         }
>     }
> 
> org.apache.activemq.broker.SslContext
>     static public void setCurrentSslContext(SslContext bs) {
>         current.set(bs);
>     }
> 
> The TransportFactory calls setCurrentSslContext twice; the second call
> sets the SslContext to null.
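
The set-then-clear pattern quoted above can be reproduced in isolation. A minimal sketch (hypothetical class and method names, not the ActiveMQ code itself): a thread-local that is set inside try and nulled in finally is visible only to code running between the two calls, so anything that reads it after bind() returns sees null.

```java
// Minimal sketch of the set-then-clear ThreadLocal pattern described above.
// Names are hypothetical; this is NOT the ActiveMQ code itself.
public class ThreadLocalDemo {
    static final ThreadLocal<String> current = new ThreadLocal<>();

    // Mirrors the shape of TransportFactory.bind(): the context is visible
    // only while the try body runs; the finally block clears it again.
    static String bind(String context) {
        try {
            current.set(context);
            return current.get();       // what doBind() would observe
        } finally {
            current.set(null);          // the "second call": context is gone
        }
    }

    public static void main(String[] args) {
        String during = bind("ssl-context");
        String after = current.get();   // anything reading later sees null
        System.out.println(during + " / " + after);
    }
}
```

This is why a connector that lazily reads the SSL context after binding completes would find nothing, even though the context was configured correctly.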
> 
> On Tue, 24 Apr 2012 09:43:03 -0500, Chris Odom
> <ch...@mediadriver.com>
> wrote:
>> Further debugging of the issue has revealed that the created
>> SslSocketConnector's sslContextFactory does not have the keyStore
>> created; as in, it's null. I have also noticed that when using either
>> configuration below, the keyStorePassword is mucked up as well:
>> 
>> <!-- SSL context used for both http(s) and ssl transport -->
>>         <sslContext>
>>             <sslContext keyStore="${karaf.home}/etc/jsse/localhost.ks"
>> keyStorePassword="changeit" />
>>         </sslContext>
>> 
>> <!-- SSL context used for both http(s) and ssl transport -->
>>         <sslContext>
>>             <sslContext
>> keyStore="file:${karaf.home}/etc/jsse/localhost.ks"
>> keyStorePassword="changeit" />
>>         </sslContext>
>> 
>> With the above listed configurations the keyStorePassword ends up being
>> just the letter 't' and not 'changeit'.
>> 
>> I am currently using apache-servicemix-4.4.1-fuse-03-06 and any help in
>> this would be deeply appreciated.
>> 
>> And yes, the sslContext element is in A-Z order within the broker
>> element.
>> 
>> Thanks
>> Chris O.
>> 
>> 
>> On Mon, 23 Apr 2012 17:30:52 -0500, Chris Odom
>> <ch...@mediadriver.com>
>> wrote:
>>> I am currently trying to set up both an https and an ssl transport
>>> connector for the default broker. I am using servicemix, deploying a
>>> blueprint version of the activemq-broker.xml, and have followed all the
>>> how-tos with no success. Below is an excerpt of my broker.xml file for
>>> the sslcontext configuration:
>>> 
>>> 
>>> When I start or update servicemix, within the console I get prompted
>>> with "org.eclipse.jetty.ssl.password : "
>>> 
>>> If you attempt to type something in, by the 3rd character it just
>>> returns without hitting Enter, prompts a second time doing the exact
>>> same thing, and then does not prompt any more. Within the log file I
>>> see this after the second prompt occurs:
>>> 
>>> 17:21:41,961 | WARN | rint Extender: 3 | log | ? ? | 80 - org.eclipse.jetty.util - 7.4.5.fuse20111017 | FAILED Krb5AndCertsSslSocketConnector@localhost:8443 FAILED: java.lang.IllegalStateException: SSL context is not configured correctly.
>>> 
>>> 17:21:41,961 | WARN | rint Extender: 3 | log | ? ? | 80 - org.eclipse.jetty.util - 7.4.5.fuse20111017 | FAILED org.eclipse.jetty.server.Server@2b76fbc2: java.lang.IllegalStateException: SSL context is not configured correctly.
>>> 
>>> 17:21:41,961 | ERROR | rint Extender: 3 | BrokerService | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Failed to start ActiveMQ JMS Message Broker (default, null). Reason: java.lang.IllegalStateException: SSL context is not configured correctly.
>>> 
>>> java.lang.IllegalStateException: SSL context is not configured correctly.
>>>  at org.eclipse.jetty.server.ssl.SslSocketConnector.doStart(SslSocketConnector.java:338)
>>>  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:58)
>>>  at org.eclipse.jetty.server.Server.doStart(Server.java:269)
>>>  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:58)
>>>  at org.apache.activemq.transport.http.HttpTransportServer.doStart(HttpTransportServer.java:94)
>>>  at org.apache.activemq.transport.https.HttpsTransportServer.doStart(HttpsTransportServer.java:71)
>>>  at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:54)
>>>  at org.apache.activemq.broker.TransportConnector.start(TransportConnector.java:250)
>>>  at org.apache.activemq.broker.BrokerService.startTransportConnector(BrokerService.java:2206)
>>>  at org.apache.activemq.broker.BrokerService.startAllConnectors(BrokerService.java:2119)
>>>  at org.apache.activemq.broker.BrokerService.start(BrokerService.java:538)
>>>  at org.apache.activemq.broker.BrokerService.autoStart(BrokerService.java:482)
>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.6.0_26]
>>>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)[:1.6.0_26]
>>>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)[:1.6.0_26]
>>>  at java.lang.reflect.Method.invoke(Method.java:597)[:1.6.0_26]
>>>  at org.apache.aries.blueprint.utils.ReflectionUtils.invoke(ReflectionUtils.java:226)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BeanRecipe.invoke(BeanRecipe.java:824)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BeanRecipe.runBeanProcInit(BeanRecipe.java:636)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:724)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:64)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:219)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:147)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:640)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:331)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:227)[10:org.apache.aries.blueprint:0.3.1]
>>>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)[:1.6.0_26]
>>>  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)[:1.6.0_26]
>>>  at java.util.concurrent.FutureTask.run(FutureTask.java:138)[:1.6.0_26]
>>>  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)[:1.6.0_26]
>>>  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)[:1.6.0_26]
>>>  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)[:1.6.0_26]
>>>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)[:1.6.0_26]
>>>  at java.lang.Thread.run(Thread.java:662)[:1.6.0_26]
>>> 
>>> 17:21:41,966 | INFO | rint Extender: 3 | BrokerService | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | ActiveMQ Message Broker (default, null) is shutting down
>>> 
>>> 17:21:41,967 | INFO | rint Extender: 3 | log | ? ? | 80 - org.eclipse.jetty.util - 7.4.5.fuse20111017 | stopped o.e.j.s.ServletContextHandler{/,null}
>>> 
>>> 17:21:42,019 | INFO | rint Extender: 3 | TransportConnector | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Connector jetty Stopped
>>> 
>>> 17:21:42,019 | INFO | rint Extender: 3 | TransportConnector | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Connector ssl Stopped
>>> 
>>> 17:21:42,019 | INFO | rint Extender: 3 | TransportConnector | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Connector openwire Stopped
>>> 
>>> 17:21:42,019 | INFO | rint Extender: 3 | TransportConnector | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Connector stomp Stopped
>>> 
>>> 17:21:42,023 | INFO | rint Extender: 3 | KahaDBStore | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Stopping async queue tasks
>>> 
>>> 17:21:42,023 | INFO | rint Extender: 3 | KahaDBStore | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Stopping async topic tasks
>>> 
>>> 17:21:42,023 | INFO | rint Extender: 3 | KahaDBStore | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Stopped KahaDB
>>> 
>>> 17:21:42,318 | INFO | rint Extender: 3 | BrokerService | ? ? | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | ActiveMQ JMS Message Broker (default, null) stopped
>>> 
>>> 
>>> 17:21:42,319 | ERROR | rint Extender: 3 | BlueprintContainerImpl | ? ? | 10 - org.apache.aries.blueprint - 0.3.1 | Unable to start blueprint container for bundle activemq-broker.xml
>>> 
>>> org.osgi.service.blueprint.container.ComponentDefinitionException: Unable to intialize bean .component-2
>>>  at org.apache.aries.blueprint.container.BeanRecipe.runBeanProcInit(BeanRecipe.java:638)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:724)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:64)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:219)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:147)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:640)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:331)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:227)[10:org.apache.aries.blueprint:0.3.1]
>>>  at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)[:1.6.0_26]
>>>  at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)[:1.6.0_26]
>>>  at java.util.concurrent.FutureTask.run(FutureTask.java:138)[:1.6.0_26]
>>>  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)[:1.6.0_26]
>>>  at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)[:1.6.0_26]
>>>  at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)[:1.6.0_26]
>>>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)[:1.6.0_26]
>>>  at java.lang.Thread.run(Thread.java:662)[:1.6.0_26]
>>> 
>>> Caused by: java.lang.IllegalStateException: SSL context is not configured correctly.
>>>  at org.eclipse.jetty.server.ssl.SslSocketConnector.doStart(SslSocketConnector.java:338)
>>>  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:58)
>>>  at org.eclipse.jetty.server.Server.doStart(Server.java:269)
>>>  at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:58)
>>>  at org.apache.activemq.transport.http.HttpTransportServer.doStart(HttpTransportServer.java:94)
>>>  at org.apache.activemq.transport.https.HttpsTransportServer.doStart(HttpsTransportServer.java:71)
>>>  at org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:54)
>>>  at org.apache.activemq.broker.TransportConnector.start(TransportConnector.java:250)
>>>  at org.apache.activemq.broker.BrokerService.startTransportConnector(BrokerService.java:2206)
>>>  at org.apache.activemq.broker.BrokerService.startAllConnectors(BrokerService.java:2119)
>>>  at org.apache.activemq.broker.BrokerService.start(BrokerService.java:538)
>>>  at org.apache.activemq.broker.BrokerService.autoStart(BrokerService.java:482)
>>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[:1.6.0_26]
>>>  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)[:1.6.0_26]
>>>  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)[:1.6.0_26]
>>>  at java.lang.reflect.Method.invoke(Method.java:597)[:1.6.0_26]
>>>  at org.apache.aries.blueprint.utils.ReflectionUtils.invoke(ReflectionUtils.java:226)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BeanRecipe.invoke(BeanRecipe.java:824)[10:org.apache.aries.blueprint:0.3.1]
>>>  at org.apache.aries.blueprint.container.BeanRecipe.runBeanProcInit(BeanRecipe.java:636)[10:org.apache.aries.blueprint:0.3.1]
>>>  ... 15 more
>>> 
>>> Any ideas on why this is happening or why this would occur would be deeply appreciated.

-- 
Thanks,
Chris Odom
512:799-0270

Re: One service with operations in different namespaces

Posted by Marcin Okraszewski <ok...@o2.pl>.
At the beginning I'd like to thank you for the help!

OK, this is a WSDL document. As far as I understand the documentation, the
wsdlFile attribute in the deployment is just for returning the WSDL, not
for Axis service configuration, isn't it? I'm afraid I will still get
errors. Am I wrong?

One more time thanks for help!
Marcin

> You define the schemas in the <types> section of the WSDL document. For 
> example:
> 
> <wsdl:definitions name='twoNamespaces'
>     targetNamespace='urn:twoNamespaces/wsdl'
>     xmlns:soap='http://schemas.xmlsoap.org/wsdl/soap/'
>     xmlns:wsdl='http://schemas.xmlsoap.org/wsdl/'
>     xmlns:ns1='urn:twoNamespaces/ns1'
>     xmlns:ns2='urn:twoNamespaces/ns2'
>     xmlns:tns='urn:twoNamespaces/wsdl'>
> 
>     <wsdl:types>
>         <xsd:schema targetNamespace='urn:twoNamespaces/ns1'
>             xmlns:xsd='http://www.w3.org/2001/XMLSchema'>
>             <xsd:element name='op1' type='xsd:string'/>
>         </xsd:schema>
>         <xsd:schema targetNamespace='urn:twoNamespaces/ns2'
>             xmlns:xsd='http://www.w3.org/2001/XMLSchema'>
>             <xsd:element name='op2' type='xsd:string'/>
>         </xsd:schema>
>     </wsdl:types>
> 
>     <wsdl:message name='op1'>
>         <wsdl:part name='body' element='ns1:op1'/>
>     </wsdl:message>
>     <wsdl:message name='op2'>
>         <wsdl:part name='body' element='ns2:op2'/>
>     </wsdl:message>
> 
>     <wsdl:portType name='interface'>
>         <wsdl:operation name='op1'>
>             <wsdl:input message='tns:op1'/>
>         </wsdl:operation>
>         <wsdl:operation name='op2'>
>             <wsdl:input message='tns:op2'/>
>         </wsdl:operation>
>     </wsdl:portType>
> 
>     <wsdl:binding name='interfaceSOAP' type='tns:interface'>
>         <soap:binding
>             transport='http://schemas.xmlsoap.org/soap/http'
>             style='document'/>
>         <wsdl:operation name='op1'>
>             <soap:operation
>               soapAction='op1'
>               style='document'/>
>             <wsdl:input>
>                 <soap:body use='literal'/>
>             </wsdl:input>
>         </wsdl:operation>
>         <wsdl:operation name='op2'>
>             <soap:operation
>               soapAction='op2'
>               style='document'/>
>             <wsdl:input>
>                 <soap:body use='literal'/>
>             </wsdl:input>
>         </wsdl:operation>
>     </wsdl:binding>
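
On the wsdlFile question raised above: in Axis 1.x it is declared in the deployment descriptor (deploy.wsdd) and controls the WSDL that Axis returns for ?wsdl requests, while the service itself is still configured by the descriptor's own parameters. A hedged sketch of such an entry (service name, path, and class name are hypothetical):

```xml
<deployment xmlns="http://xml.apache.org/axis/wsdd/"
            xmlns:java="http://xml.apache.org/axis/wsdd/providers/java">
  <service name="twoNamespaces" provider="java:RPC" style="document" use="literal">
    <!-- wsdlFile: the hand-written WSDL Axis serves for ?wsdl; illustrative path -->
    <wsdlFile>/wsdl/twoNamespaces.wsdl</wsdlFile>
    <parameter name="className" value="example.TwoNamespacesImpl"/>
    <parameter name="allowedMethods" value="op1 op2"/>
  </service>
</deployment>
```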

-- 
-------------------------------------------------------------
                       Marcin Okraszewski
okrasz@o2.pl                                       GG: 341942
okrasz@vlo.ids.gda.pl          PGP: www.okrasz.prv.pl/pgp.asc
-------------------------------------------------------------



Re: problems with ojb 0.9.8

Posted by David Forslund <dw...@lanl.gov>.
Indeed.  It worked with MySQL, and I upgraded hsqldb and it works
with it, too!

Thanks,

Dave
At 10:56 PM 12/30/2002 +0100, Armin Waibel wrote:
>Damned! A hard nut to crack ;-)
>Now it seems to be a field configuration or a database problem.
>In both cases I'm not an expert.
>
><snip>
>    /*
>     * @see Platform#setObject(PreparedStatement, int, Object, int)
>     */
>    public void setObjectForStatement(PreparedStatement ps, int index, Object value, int sqlType)
>            throws SQLException
>    {
>        if ((value instanceof String) && (sqlType == Types.LONGVARCHAR))
>        {
>            String s = (String) value;
>            ps.setCharacterStream(index, new StringReader(s), s.length());
>        }
>        else
>        {
>            ps.setObject(index, value, sqlType);
>        }
>    }
>
>setCharacterStream was called when LONGVARCHAR was used.
>Did you use the latest version of hsqldb? I think in the current version
>LONGVARCHAR fields should be supported.
>Maybe take a look in the repository_junit.xml for alternative field
>types.
>Sorry, I couldn't help more.
>
>regards,
>Armin
>
>----- Original Message -----
>From: "David Forslund" <dw...@lanl.gov>
>To: "OJB Users List" <oj...@jakarta.apache.org>
>Sent: Monday, December 30, 2002 10:25 PM
>Subject: Re: problems with ojb 0.9.8
>
>
> > At 09:25 PM 12/30/2002 +0100, Armin Waibel wrote:
> > >Hi again,
> > >
> > > > > > >
> > > > > > > > I see what the problem is, but am not sure what the solution
> > > > > > > > is.
> > > > > > > >
> > > > > > > > I have an abstract class that is implemented with a number of
> > > > > > > > classes. I'm trying to create a unique key for an instance
> > > > > > > > class, but when I check there are no field descriptors for the
> > > > > > > > base class.
> > > > > > >
> > > > > > >Have you tried
> > > > > > >Class realClass = abstractBaseClass.getClass();
> > > > > > >ClassDescriptor cld = broker.getClassDescriptor(realClass);
> > > > > > >to get the real class descriptor? Then it should be possible to
> > > > > > >get the field.
> > > > > >
> > > > > > This doesn't help because I'm just calling getUniqueId within OJB
> > > > > > and I don't have any control over what it does except through
> > > > > > the repository.
> > > > >
> > > > >
> > > > >I do not understand this. You declare your 'valueId' as an
> > > > >autoincrement field, but in your stack trace it seems you do a direct
> > > > >call to PB.getUniqueId?
> > > >
> > > > Well, I did add this because 0.9.8 was complaining about this field
> > > > being absent.  I have removed it without any change in the behavior.
> > > >
> > > >
> > > > > > >
> > > > >
> > >
> >>>org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueI
> > > > > > > > >>> >
> > > > > >
> > >
> >gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
> > > > > > > > >>> >          at
> > > > > > > > >>> >
> > > > > > > >
> > > > > > >
> > > > >
> > >
> >>>gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationVal
> > > > >
> > > > >Could you post a code snippet to ease my understanding?
> > > > >But, by the way, this seems to be a bug.
> > > >
> > > > I'm not sure what you mean by a code snippet.  When I call the class
> > > > constructor, I call getUniqueId with the class name and attribute:
> > > >
> > > > This is the specific method I call.
> > > > /**
> > > >      * return next number in a persistent sequence
> > > >      */
> > > >     public long getNextSeq(Class clazz, String fieldName) {
> > > >         cat.debug("getNextSeq: " + clazz.getName() + " " + fieldName);
> > > >         // return sequenceManager.getUniqueId(clazz, fieldName);
> > > >         try {
> > >             // get the CLD for the base class
> > >             ClassDescriptor cld = broker.getClassDescriptor(clazz);
> > >             if ((cld.isAbstract() || cld.isInterface()) && cld.isExtent())
> > >             {
> > >                 // we grab the first found extent class
> > >                 clazz = (Class) cld.getExtentClasses().get(0);
> > >             }
> > > >             return broker.getUniqueId(clazz, fieldName);
> > > >         } catch (org.apache.ojb.broker.PersistenceBrokerException e) {
> > > >             cat.error("Can't get ID from broker: " + clazz.getName() + " " + fieldName, e);
> > > >             // System.exit(1);
> > > >             return 0;
> > > >         }
> > > >     }
> > > >
> > >Maybe this could be a workaround for your problem.
> > >Keep in mind that getUniqueId(clazz, fieldName) was deprecated
> > >and will be replaced by getUniqueId(FieldDescriptor field).
> > >
> > >What I don't understand is why you need a getNextSeq method
> > >when you define autoincrement fields. OJB does all sequence key
> > >generation automatically for you.
> >
> > It didn't use to.  Plus we allow for other OR mapping tools and
> > need some level of control of ids, independent of the tool.
> >
> > OK.  I made this change and that problem went away, but another
> > one came up.  I now get an error from hsqldb that the requested
> > function is not supported:
> >
> > [org.apache.ojb.broker.accesslayer.JdbcAccess] ERROR: SQLException during
> > the execution of the insert (for a gov.lanl.COAS.String_): This function is not supported
> > This function is not supported
> > java.sql.SQLException: This function is not supported
> >          at org.hsqldb.Trace.getError(Trace.java:180)
> >          at org.hsqldb.Trace.getError(Trace.java:144)
> >          at org.hsqldb.Trace.error(Trace.java:192)
> >          at org.hsqldb.jdbcPreparedStatement.getNotSupported(jdbcPreparedStatement.java:1602)
> >          at org.hsqldb.jdbcPreparedStatement.setCharacterStream(jdbcPreparedStatement.java:1375)
> >          at org.apache.ojb.broker.platforms.PlatformDefaultImpl.setObjectForStatement(PlatformDefaultImpl.java:216)
> >          at org.apache.ojb.broker.accesslayer.StatementManager.bindInsert(StatementManager.java:487)
> >          at org.apache.ojb.broker.accesslayer.JdbcAccess.executeInsert(JdbcAccess.java:194)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1966)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:1905)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:614)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeReferences(PersistenceBrokerImpl.java:641)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1938)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:1905)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:614)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeCollectionObject(PersistenceBrokerImpl.java:789)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeCollections(PersistenceBrokerImpl.java:769)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1989)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:1905)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:614)
> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:588)
> >          at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.store(DelegatingPersistenceBroker.java:123)
> >          at gov.lanl.Database.OJBDatabaseMgr.insertElement(OJBDatabaseMgr.java:30
> >
> > Thanks,
> >
> > Dave
> >
> > >HTH
> > >regards,
> > >Armin
> > >
> > > > thanks,
> > > >
> > > > Dave
> > > >
> > > >
> > > > >Armin
> > > > >
> > > > >
> > > > > >
> > > > > > >Or define your base class with all fields in the repository file
> > > > > > >and declare all extent-classes in the class-descriptor. Then the
> > > > > > >default sequence manager implementations should be able to
> > > > > > >generate an id unique across all extents.
> > > > > > >Or define only the abstract class with all extent-classes; then
> > > > > > >you should be able to get one of the extent classes.
> > > > > >
> > > > > > This is how I have it defined in my repository_user.xml
> > > > > >
> > > > > >    <class-descriptor class="gov.lanl.COAS.ObservationValue_">
> > > > > >      <extent-class class-ref="gov.lanl.COAS.Multimedia_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.NoInformation_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.Numeric_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.ObservationId_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.QualifiedCodeInfo_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.QualifiedPersonId_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.Range_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.String_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.TimeSpan_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.UniversalResourceIdentifier_"/>
> > > > > >      <extent-class class-ref="gov.lanl.COAS.Empty_"/>
> > > > > >    </class-descriptor>
> > > > > >
> > > > > > and an example for one of the extent classes
> > > > > >
> > > > > >   <class-descriptor
> > > > > >         isolation-level="read-uncommitted"
> > > > > >         class="gov.lanl.COAS.Empty_"
> > > > > >         table="OjbEmpty_"
> > > > > >   >
> > > > > >     <field-descriptor id="1"
> > > > > >         name="valueId"
> > > > > >         jdbc-type="INTEGER"
> > > > > >         column="valueId"
> > > > > >         primarykey="true"
> > > > > >         autoincrement="true"
> > > > > >     />
> > > > > >
> > > > > >   </class-descriptor>
> > > > > >
> > > > > > There is no table for the ObservationValue_ class because it is
> > > > > > an abstract class.
> > > > > > This is what I've been using for 0.9.7 and it works fine. It
> > > > > > fails under 0.9.8 when trying to get a unique id for each of the
> > > > > > extent classes. I think this is what you are describing in your
> > > > > > last suggestion.
> > > > > >
> > > > > > thanks,
> > > > > > Dave
> > > > > >
> > > > > >
> > > > > > >HTH
> > > > > > >regards,
> > > > > > >Armin
> > > > > > >
> > > > > > > >
> > > > > > > > This all worked fine in 0.9.7, but perhaps there has been some
> > > > > > > > change in the semantics?  We put the necessary table elements
> > > > > > > > in each instance of the class but not in the table for the
> > > > > > > > base class (which actually doesn't exist).
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > >
> > > > > > > > Dave
> > > > > > > >
> > > > > > > > At 11:35 AM 12/30/2002 -0700, David Forslund wrote:
> > > > > > > > >When I put a check inside of the getFieldDescriptor, I find
> > > > > > > > >that it is being called by HighLowSequence with the argument
> > > > > > > > >ojbConcreteClass and is returning a null for the field.  Is
> > > > > > > > >this what is expected?
> > > > > > > > >
> > > > > > > > >Dave
> > > > > > > > >
> > > > > > > > >At 10:43 AM 12/30/2002 -0700, David Forslund wrote:
> > > > > > > > >>It wasn't null in my code that called the OJB code.  This
> > > > > > > > >>code has been working fine in 0.9.7.  If the XML needed to
> > > > > > > > >>change for some reason, it might have caused this.  I'm
> > > > > > > > >>passing in a string of a variable that is defined in my
> > > > > > > > >>table.  Whether OJB properly connects a "Field" to that
> > > > > > > > >>table is where the problem may be.  It did in the past
> > > > > > > > >>without any problem.  I have a hard time telling exactly
> > > > > > > > >>what changed between these two versions.
> > > > > > > > >>
> > > > > > > > >>Thanks,
> > > > > > > > >>
> > > > > > > > >>Dave
> > > > > > > > >>At 01:49 PM 12/30/2002 +0100, Armin Waibel wrote:
> > > > > > > > >>>Hi David,
> > > > > > > > >>>
> > > > > > > > >>>the sequence generator implementation now only generates
> > > > > > > > >>>ids for fields declared in the repository.
> > > > > > > > >>>I think you got this NullPointerException because the SM
> > > > > > > > >>>got a 'null' field:
> > > > > > > > >>>
> > > > > > > > >>><snip SequenceManagerHelper>
> > > > > > > > >>>public static String buildSequenceName(
> > > > > > > > >>>        PersistenceBroker brokerForClass, FieldDescriptor field)
> > > > > > > > >>>{
> > > > > > > > >>>48--->!!! ClassDescriptor cldTargetClass = field.getClassDescriptor();
> > > > > > > > >>>    String seqName = field.getSequenceName();
> > > > > > > > >>>.....
> > > > > > > > >>></snip>
> > > > > > > > >>>
> > > > > > > > >>>So check your code that the given FieldDescriptor wasn't
> > > > > > > > >>>null.
> > > > > > > > >>>
> > > > > > > > >>>HTH
> > > > > > > > >>>
> > > > > > > > >>>regards,
> > > > > > > > >>>Armin
> > > > > > > > >>>
> > > > > > > > >>>----- Original Message -----
> > > > > > > > >>>From: "David Forslund" <dw...@lanl.gov>
> > > > > > > > >>>To: "OJB Users List" <oj...@jakarta.apache.org>
> > > > > > > > >>>Sent: Monday, December 30, 2002 1:33 AM
> > > > > > > > >>>Subject: Re: problems with ojb 0.9.8
> > > > > > > > >>>
> > > > > > > > >>>
> > > > > > > > >>> > I'm trying to upgrade from 0.9.7 to 0.9.8 and am having
> > > > > > > > >>> > some problems that I don't understand yet.
> > > > > > > > >>> >
> > > > > > > > >>> > I'm getting the warning about not finding an
> > > > > > > > >>> > autoincrement attribute for a class.  I'm not sure when
> > > > > > > > >>> > I have to have an autoincrement attribute, but the
> > > > > > > > >>> > primarykey for the class I'm using is a varchar, so that
> > > > > > > > >>> > autoincrement doesn't seem appropriate.
> > > > > > > > >>> >
> > > > > > > > >>> > Subsequently, I get a null pointer exception error in
> > > > > > > > >>> > the SequenceManagerHelper that I don't understand:
> > > > > > > > >>> > java.lang.NullPointerException
> > > > > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHelper.buildSequenceName(SequenceManagerHelper.java:48)
> > > > > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHiLoImpl.getUniqueId(SequenceManagerHiLoImpl.java:49)
> > > > > > > > >>> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.getUniqueId(PersistenceBrokerImpl.java:2258)
> > > > > > > > >>> >          at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueId(DelegatingPersistenceBroker.java:242)
> > > > > > > > >>> >          at gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
> > > > > > > > >>> >          at gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationValue_.java:54)
> > > > > > > > >>> >          at gov.lanl.COAS.Empty_.<init>(Empty_.java:31)
> > > > > > > > >>> >
> > > > > > > > >>> > I'm pretty sure that it is being called correctly from
> > > > > > > > >>> > my code (which works fine in 0.9.7), but it is failing
> > > > > > > > >>> > now.
> > > > > > > > >>> >
> > > > > > > > >>> > An unrelated warning in a different application is that
> > > > > > > > >>> > OJB says I should use addLike() for using LIKE, but it
> > > > > > > > >>> > seems to use the right code anyway.  Is this just a
> > > > > > > > >>> > deprecation issue?  I don't see why it bothers to tell
> > > > > > > > >>> > me this, if it can figure out what to do anyway.
> > > > > > > > >>> >
> > > > > > > > >>> > Thanks,
> > > > > > > > >>> >
> > > > > > > > >>> > Dave
> > > > > > > > >>> >
> > > > > > > > >>> >
>--
>To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
>For additional commands, e-mail: <ma...@jakarta.apache.org>
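[Archive note] For readers unfamiliar with the hi/lo scheme that SequenceManagerHiLoImpl in the stack trace above implements: the broker reserves a whole block of ids per database round trip, so most calls never touch the database. A minimal sketch in plain Java -- the grab size and the in-memory "hi" counter are illustrative assumptions, not OJB's actual code (in OJB, "hi" lives in a database table):

```java
// Illustrative hi/lo id generator. In OJB the "hi" value is persisted in
// a database table; here an in-memory field stands in for it.
final class HiLoSketch {
    static final int GRAB_SIZE = 20;  // ids reserved per "database" round trip
    private long hi = 0;              // last reserved block number
    private int lo = GRAB_SIZE;       // exhausted, so the first call grabs a block

    synchronized long nextId() {
        if (lo >= GRAB_SIZE) {        // local block used up:
            hi++;                     // reserve the next block (one update in OJB)
            lo = 0;
        }
        return hi * GRAB_SIZE + lo++; // hi=1 yields ids 20..39, hi=2 yields 40..59
    }

    public static void main(String[] args) {
        HiLoSketch g = new HiLoSketch();
        for (int i = 0; i < 3; i++) {
            System.out.println(g.nextId()); // prints 20, 21, 22
        }
    }
}
```

The detail relevant to this thread: the generator keys its counter off a FieldDescriptor declared in the repository, which is why a field lookup that returns null ends in the NullPointerException Dave reported.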


RE: Sa -- lint : HOWTO know which cf file gives the problem ?

Posted by Matthias Fuhrmann <Ma...@stud.uni-hannover.de>.
On Fri, 26 Jan 2007, Florent Gilain wrote:

> Hummm thanks a lot, it was finally easier than I was thinking  ;-))
>
> Florent
>
[...]
> 70_zmi_german.cf:score    ZMIde_SUBBIG 1.8
>
> so the file containing the rule is 70_zmi_german.cf in the current
> directory.

you are welcome :)

regards,
Matthias

Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by to...@tuxteam.de.
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On Wed, Jan 31, 2007 at 08:29:55AM -0500, Perrin Harkins wrote:
> On 1/31/07, Todd Finney <tf...@boygenius.com> wrote:
> >Wouldn't throwing a
> >
> >         return DECLINED unless $r->is_initial_req;
> >
> >at the top of the handler fix the problem, in that case?
> 
> Probably, if you don't actually need this handler to run for the final
> URI.  What's the purpose of the handler?
> 
> >That occurred to me, and one of the first things that I tried was something
> >like this:
> >
> >         my $temp_session=$session{_session_id};
> >         $r->pnotes('SESSION_ID', $temp_session);
> >
> >It didn't change anything
> 
> It was worth a shot.  I really thought that would do it though.

If I understood that correctly (which I might quite well not!), if
pnotes takes a ref to whatever is put in, it'll take a ref to the
$session this way too. Maybe you want to put the _session_id in there
(but then, you'll have to cope with the case that the corresponding
session might disappear -- kind of a weak reference).

Regards
- -- tomás
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.1 (GNU/Linux)

iD8DBQFFwX7KBcgs9XrR2kYRAkBCAJ9+0KGWQrStyS/igg3VEk6E46+TbgCfSQcy
YeCsiFXgo5swBMGtJ+7vgL0=
=PKFS
-----END PGP SIGNATURE-----
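[Archive note] Tomás's "kind of a weak reference" aside has a direct analogue in, for example, Java's java.lang.ref.WeakReference. A small illustrative sketch (not the thread's Perl code; the variable names are made up):

```java
import java.lang.ref.WeakReference;

class WeakRefDemo {
    public static void main(String[] args) {
        Object session = new Object();  // stands in for the tied %session
        WeakReference<Object> ref = new WeakReference<>(session);

        // While a strong reference exists, the referent stays reachable.
        System.out.println(ref.get() == session);  // prints true

        // Drop the strong reference: after a garbage collection the
        // referent may vanish, so callers must cope with ref.get()
        // returning null -- the "session might disappear" case above.
        session = null;
        System.gc();  // only a hint; collection timing is not guaranteed
    }
}
```

Storing only the session id (a plain copy) instead of the session object avoids pinning the session at all, which is exactly the workaround discussed in this thread.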


Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Perrin Harkins <ph...@gmail.com>.
On 1/31/07, Todd Finney <tf...@boygenius.com> wrote:
> Wouldn't throwing a
>
>          return DECLINED unless $r->is_initial_req;
>
> at the top of the handler fix the problem, in that case?

Probably, if you don't actually need this handler to run for the final
URI.  What's the purpose of the handler?

> That occurred to me, and one of the first things that I tried was something
> like this:
>
>          my $temp_session=$session{_session_id};
>          $r->pnotes('SESSION_ID', $temp_session);
>
> It didn't change anything

It was worth a shot.  I really thought that would do it though.  Maybe
in your real code there's some additional line that keeps $session
from going out of scope.  You could explicitly undef it at the end of
your HeaderParserHandler if that's the case.

- Perrin

Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Todd Finney <tf...@boygenius.com>.
At 12:30 AM 1/31/2007 -0500, Perrin Harkins wrote:
>As for what's going wrong, my guess is that it has to do with the
>internal redirects that happen when you access / as opposed to
>/index.phtml.  You are trying to open the session in the
>HeaderParserHandler phase, so it's going to open a session, then do an
>internal redirect, and try to open the same session again, effectively
>deadlocking.

Wouldn't throwing a

         return DECLINED unless $r->is_initial_req;

at the top of the handler fix the problem, in that case?

>That's a 2.0 doc, but it applies to 1.0 as well: pnotes() increases
>the reference count to $session rather than copying it, so it doesn't
>get destroyed until after the internal redirect has completed and
>pnotes gets torn down.  If you use a temporary variable to hold the
>_session_id key, this will not happen and that may fix your problem.

That occurred to me, and one of the first things that I tried was something 
like this:

         my $temp_session=$session{_session_id};
         $r->pnotes('SESSION_ID', $temp_session);

It didn't change anything, so I decided that either (a) that wasn't the 
problem, or (b) after years of doing this I *still* don't fundamentally 
understand references.  Either conclusion made me uncomfortable, so I went 
looking for other potential solutions.



Re: Registration Opens for ApacheCon 2003

Posted by Hans Kind <hk...@flyingservers.nl>.
According to the Alexis web site, 
http://alexispark.com/rooms/rooms.htm  all rooms have high speed Internet.

That's what they said on the Linux World Expo web site regarding the Argent 
hotel for the LinuxWorld Expo held last August in San Francisco.

Turned out there was no high speed internet for the rooms booked through 
Linux World, and they even charged me for local phone calls.

Anyway, high speed internet would be great.

rgds,

Hans

At 12:07 15-9-2003 -0400, you wrote:
>When I was at the Alexis for DEF CON a great many of the rooms did have 
>high-speed access directly in the rooms for a set rate per 24-hr period.
>
>I'm not sure how many rooms have this feature, however.
>
>
>--
>B.K. DeLong
>bkdelong@pobox.com
>+1.617.797.2472
>
>http://ocw.mit.edu                           Work.
>http://www.brain-stream.com               Play.
>http://www.the-leaky-cauldron.org        Potter.
>http://www.city-of-doors.com               Sigil
>
>PGP Fingerprint:
>38D4 D4D4 5819 8667 DFD5  A62D AF61 15FF 297D 67FE
>
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: discuss-unsubscribe@ApacheCon.Com
>For additional commands, e-mail: discuss-help@ApacheCon.Com
>

Re: Registration Opens for ApacheCon 2003

Posted by "B.K. DeLong" <bk...@pobox.com>.
At 05:49 PM 9/15/2003 +0200, Lars Eilebrecht wrote:
>No, but we will be providing a wireless network for the conference.
>We will be trying to make it available in as many rooms as possible,
>but it is unlikely that we can do it for all rooms.

When I was at the Alexis for DEF CON a great many of the rooms did have 
high-speed access directly in the rooms for a set rate per 24-hr period.

I'm not sure how many rooms have this feature, however.


--
B.K. DeLong
bkdelong@pobox.com
+1.617.797.2472

http://ocw.mit.edu                           Work.
http://www.brain-stream.com               Play.
http://www.the-leaky-cauldron.org        Potter.
http://www.city-of-doors.com               Sigil

PGP Fingerprint:
38D4 D4D4 5819 8667 DFD5  A62D AF61 15FF 297D 67FE


Re: ssl-authorities-file

Posted by John Locke <ma...@freelock.com>.
jstewart@pobox.com wrote:

> Thanks for replying.
>
>
>> You're confusing the meaning of 'ssl-authorities-file'.  It means, "which
>> CA's do I trust?"   It's supposed to point to the certificate of the
>> *CA* that signed the server cert, not to the server cert itself.
>
>
> I'll not dispute this. However, my certificate is signed by GeoTrust. 
> I went to their website (www.geotrust.com <http://www.geotrust.com/>) 
> and downloaded their certificate. I changed my servers file to point 
> to it and still no joy.
>
You're still confusing the Certificate Authority with the Certificate.

You state in your original email:

> As best as I can understand it, this is caused because my certificate 
> has the name "foobar.net" in it but the actual name is "dev.foobar.net".

That sounds like the problem.

In your configuration, all you did was tell Subversion to trust the 
Geotrust Certificate Authority to authenticate server certificates--but 
your server certificate doesn't match the server, so Subversion 
continues to fail.

You can either generate a certificate for dev.foobar.net signed by 
GeoTrust (or create your own CA to sign your certificates, and copy your 
CA's certificate to c:\foobar.net.crt), and install it in your web 
server, or use the ssl-ignore-host-mismatch option in Subversion.

Cheers,
John

P.S. This sounds like it belongs on the Users list, not the Dev list...


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Jonathan Vanasco <jv...@2xlp.com>.
On Jan 30, 2007, at 6:44 PM, Todd Finney wrote:

> The bigger, fancier test case checks all of that.

ok.  great.   sorry for assuming you didn't have that already -- just  
have to be sure.

> What I posted was a simplified test case, in order to demonstrate  
> the problem in as few lines of code as possible.  I even used as a  
> base a code section that should be "known good", as it appears in  
> the perldoc for the manual.  I thought that I made this pretty  
> clear, I'll try harder next time.

well, you said fancy crap.  i didn't know what kind of fancy crap.

> Problems such as Apache::Session timing out are unlikely to be the  
> culprit, as the problem is reliably reproducible under narrow,  
> specific circumstances as outlined in my original message.   
> Sessions created under the successful cases never fail, and  
> sessions created under the failure cases never succeed.  Removing  
> the single line in question causes all requests to succeed.

honestly, you'd be surprised.  i've seen tons of odd issues with it.

there's definitely an issue with the tied variable then.  i've had  
that happen before.  i can't seem to remember what the issue was.

I'm going through the old listserv articles right now

this *might* give you some ideas
	http://www.issociate.de/board/post/309410/Apache::Session::MySQL_lock_troubles.html

a bunch of the '(X-No-Archive: yes)' deleted posts in that are from  
you :)

i remember debugging Apache::Session::MySQL at some point myself,  
and that solved some issues.  I can't seem to remember where the  
issue was.  I do recall having the same problem as you at some point  
though :(

// Jonathan Vanasco

| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -  
- - - - - - - - - - - - - - - - - - -
| SyndiClick.com
| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -  
- - - - - - - - - - - - - - - - - - -
|      FindMeOn.com - The cure for Multiple Web Personality Disorder
|      Web Identity Management and 3D Social Networking
| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -  
- - - - - - - - - - - - - - - - - - -
|      RoadSound.com - Tools For Bands, Stuff For Fans
|      Collaborative Online Management And Syndication Tools
| - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -  
- - - - - - - - - - - - - - - - - - -



RE: Release/Branching best practices

Posted by Ian Wood <Ia...@sucden.co.uk>.
> current solution is to require that developers specify in their log
> messages (enforced via a pre-commit hook) which release their
> fix/project belongs to. This should help us scrape the log messages to
> identify which projects and bugs went into a release.

 

We do a similar thing using a prefix before each commit. 

 

In the past we also exported the log of each path, i.e. branch to see
what work was done in there using the prefix to group commits together. 

 

I noticed the other day that you can link TSVN with issue trackers -
maybe that will be a way to go. 

 

Another difference we have is that we release from Tags rather than the
Trunk. 

________________________________

From: James Oltmans [mailto:JOltmans@bolosystems.com] 
Sent: 29 November 2007 01:45
To: Ian Wood
Cc: users@subversion.tigris.org
Subject: RE: Release/Branching best practices

 

Thanks for your response Ian.

Currently we do something like in picture at the bottom. We spawn
projects (p1222) as branches off of releases (prd_7.2.2).

Releases are cut from the trunk. Bugs and projects are moved to the
trunk when they are complete and approved. Our issue is that we have
trouble identifying what is a part of each release. For instance, if we
had 12 bugs come from the bugs branch and 2 projects come in then cut
our release and moved 3 bugs in to the trunk and then found issues with
the release. We fix the release and then merge back to the trunk but
this starts to get convoluted as to which commits belong to which
release. Our current solution is to require that developers specify in
their log messages (enforced via a pre-commit hook) which release their
fix/project belongs to. This should help us scrape the log messages to
identify which projects and bugs went into a release. 

 

Our other issue is keeping the bugs branch and trunk in-synch. With each
release we merge everything over to bugs (generally this will reapply
fixed bugs and move projects over). However, there's never a guarantee
that it won't screw up the bugs team's development process.

 

Note: We used to have a separate QA branch to keep the trunk always
stable for spawning projects, but there's no point in doing that since
we now spawn projects off the required release branch.

 
<http://sp1/rd/scm/Help%20images/Source%20PNGs/Merge%20Outline%20Detailed_v3.0.png> 

 

________________________________

From: Ian Wood [mailto:Ian.Wood@sucden.co.uk] 
Sent: Wednesday, November 28, 2007 2:33 AM
To: James Oltmans; users@subversion.tigris.org
Subject: RE: Release/Branching best practices

 

Hi James,

 

This is how we do it. 

 

We have a repo as below.

 

>Trunk

>Branches

 >Versions

  >1_0_0

  >1_0_1

>Tags

 >SuccessfulBuilds

 >1_0_0

  >1_0_0_1

  >1_0_0_2

 

The main work is done on the Trunk. Then each month we make a Version
branch of the current month's version (just the first three numbers;
the fourth is determined by the CruiseControl machine).

 

The code on this version branch is released to the test team and tested
and any bugs found are then fixed on that branch and released again. 

 

Then when that code is released to live the changes made are merged back
to the Trunk and another branch is taken. 

 

Incidentally, each time the Version branch builds successfully a Tag is
taken with the version number and a deployment script is created. 

 

We are not finding it too burdensome, the only problem we have found is
when people make changes to the same code in both places without merging
as they go. 

 

What are you currently doing?

 

Best regards,

 

Ian

 

 

 

 

 

________________________________

From: James Oltmans [mailto:JOltmans@bolosystems.com] 
Sent: 28 November 2007 00:55
To: users@subversion.tigris.org
Subject: Release/Branching best practices

 

Hello all,

 

Could someone point me in the right direction for finding best-practices
or software to manage releases? We are trying to use a monthly release
cycle and our current branch and merge management is becoming a bit
burdensome.

 

Thanks,
James

 

www.sucden.co.uk <http://www.sucden.co.uk/> 

Sucden (UK) Limited, 5 London Bridge Street, London SE1 9SG
Telephone +44 20 7940 9400
 
Registered in England no. 1095841
VAT registration no. GB 446 9061 33

Authorised and Regulated by the Financial Services Authority (FSA) and
entered in the FSA register under no. 114239

 



Re: Externals using absolute path on Windows

Posted by Philip Martin <ph...@wandisco.com>.
David Hickman <da...@audleytravel.com> writes:

> David Hickman <da...@audleytravel.com> writes:
>
>> Is anyone aware of any work arounds for issue number 4073?  I guess 
>> using relative paths is likely to work but that would require changes 
>> to a number of our projects that we would rather not tackle at this 
>> time.  Are there any special characters or special methods of 
>> formatting the path that I can use to get the externals to work?  If 
>> not does anyone have any idea about the timescales for the version in 
>> which this issue is likely to be resolved?
>
> Allowing absolute paths in older versions on Windows was inadvertent.
> Adding support is not currently planned since allowing the server to
> place files at known locations outside the working copy has security
> implications.
>
> --
> Philip
>
>
>>>>>>>>
>
> Hi Philip,
>
> Thanks for your reply.  I received a response on the Tortoise SVN
> mailing list that this bug had already been fixed in trunk and would
> be back ported to the 1.7 branch.  Is it definite that support for
> absolute paths will not exist in the future?

The issue was fixed by making Windows behave like Unix and so give an
"invalid property" error rather than an assert.  Adding support for
absolute paths is not currently planned due to the security issue.

-- 
Philip

Re: Help with SUM function

Posted by David Fisher <df...@jmlafferty.com>.
Hi Jim,

Cell names should work fine, but you may want to use Apache POI 3.7b2 as there have been bug fixes.

http://poi.apache.org/spreadsheet/quick-guide.html#NamedRanges

Named cells are really convenient. Here is a code fragment that fills a HashMap with numeric values from all the named cells on a worksheet.

        Workbook wb = new HSSFWorkbook(new ByteArrayInputStream(bytes));
        Sheet sh = wb.getSheetAt(0);
        FormulaEvaluator wbeval = wb.getCreationHelper().createFormulaEvaluator();

        DecimalFormat fmt = new DecimalFormat("###.00");

        //collect tagged cells into a map
        Map<String, String> model = new HashMap<String, String>();
        for (int i = 0; i < wb.getNumberOfNames(); i++) {
            Name nm = wb.getNameAt(i);
            if(nm.isDeleted()) continue;

            String key = nm.getNameName();

            String nameFmla = nm.getRefersToFormula();
            CellReference ref = new CellReference(nameFmla);

            Row row = sh.getRow(ref.getRow());
            if (row != null) {
                Cell cell = row.getCell(ref.getCol());
                if (cell != null) {
                    try {
                        // try to evaluate the cell
                        CellValue cv = wbeval.evaluate(cell);
                        if (cv != null && cv.getCellType() == Cell.CELL_TYPE_NUMERIC) {
                            double dval = cv.getNumberValue();
                            model.put(key, fmt.format(dval));
                        }
                    } catch (RuntimeException e){
                        // YK: catch any errors thrown by the formula evaluator
                        // the safe fallback is to retrieve the cached formula result
                        if (cell.getCachedFormulaResultType() == Cell.CELL_TYPE_NUMERIC){ //ensure that the cell is numeric
                            double dval = cell.getNumericCellValue();
                            model.put(key, fmt.format(dval));
                        }
                    }
                }
            }
        }

It works currently with poi-3.6-20091214, Tomcat 6 and Java 6.

Note this includes formula evaluation. Errors from the formula evaluator such as external references which POI will not follow are caught and the result cached by Excel is used instead.

(Thanks Yegor! Probably this ought to be added to the poi documentation on the busy developer's guide.)

Regards,
Dave

On Aug 31, 2010, at 2:06 PM, Jim Bury wrote:

> Could I do something with cell names? I haven't been able to get it to keep the cell names when I generate the spreadsheet... Is there a trick or is it not doable?
> 
> Jim
> 
> -----Original Message-----
> From: Michael Zalewski [mailto:zalewski@optonline.net] 
> Sent: Tuesday, August 31, 2010 3:13 PM
> To: user@poi.apache.org
> Subject: Re: Help with SUM function
> 
> Sounds like you are having a column with Subtotals and Grand Totals. If
> you use Excel's SUBTOTAL function (function number 9 = SUM) instead of
> plain SUM, the formula that yields your grand total does not need to pick
> out ranges. Just run SUBTOTAL over the entire column.
> 
> For example
>   A             B
> 1 Supplier #1   1.00 
> 2               2.00
> 3 Subtotal      =SUBTOTAL(9,B1:B2)
> 4 Supplier #2   3.00
> 5               4.00
> 6 Subtotal      =SUBTOTAL(9,B4:B5)
> 7 GRAND TOTAL   =SUBTOTAL(9,B1:B5)
> 
> You would think that the GRAND TOTAL would be double the correct result,
> because the range includes the subtotals at B3 and B6 -- and with plain
> SUM it would be. But SUBTOTAL ignores cells that already contain SUBTOTAL
> results, so the grand total comes out right.
> 
> I'm not sure that the POI Formula Evaluator behaves this way. But Excel
> does.
> 
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@poi.apache.org
> For additional commands, e-mail: user-help@poi.apache.org
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@poi.apache.org
> For additional commands, e-mail: user-help@poi.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@poi.apache.org
For additional commands, e-mail: user-help@poi.apache.org
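[Archive note] On Jim's side question (keeping cell names when generating a spreadsheet): names are not carried over automatically; they have to be created on the Workbook. A hedged sketch against the POI 3.x ss.usermodel API that Dave's fragment uses -- the sheet name, cell position, and output file name below are made up for illustration, and the snippet needs the POI jars on the classpath:

```java
import java.io.FileOutputStream;

import org.apache.poi.hssf.usermodel.HSSFWorkbook;
import org.apache.poi.ss.usermodel.Name;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;

public class CreateNamedCell {
    public static void main(String[] args) throws Exception {
        Workbook wb = new HSSFWorkbook();
        Sheet sheet = wb.createSheet("Data");
        sheet.createRow(2).createCell(1).setCellValue(42.0);

        // Create a workbook-level name that refers to Data!$B$3
        Name total = wb.createName();
        total.setNameName("total");
        total.setRefersToFormula("Data!$B$3");

        FileOutputStream out = new FileOutputStream("named.xls");
        wb.write(out);
        out.close();
    }
}
```

Reading the file back, wb.getNameAt(i) as in Dave's fragment should then see the "total" name.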


Re: Re: Cannot load JDBC driver class

Posted by Rick Fincher <rn...@tbird.com>.
> Um, is TOMCAT supposed to be an environment variable?

No, not for Tomcat anyway.

It sounds like you have the usual problem stuff right, so it's got to be in
the setup in the DataSource end of it.

I see a couple of things here.  First off you are not using any connection
pooling, does the Sybase JDBC driver pool internally?

If not, I'd suggest using the setup in the JNDI DataSource HowTo with DBCP.
It just involves dropping the jar files listed in the HowTo into common/lib.
Your performance will be much better.

I wouldn't use Administrator yet to do this type of setup, it is still not
100%.  Set up your server.xml file manually.

Also, is the Sybase driver type 4? You may need a type 4 driver to connect
directly as a DataSource, but DBCP will use earlier drivers and provide the
necessary interfaces in its DataSource Factory.

Also, the examples for MySQL with DBCP do not use the Global JNDI area and
they worked for me first time even though I use a different database.  I
just changed the driver name and URL.

If you prefer to use the Sybase driver unpooled, try moving all your stuff
into the context and out of the Global JNDI area in server.xml.

Also try changing your Java code (I don't remember which form you used) from
this:

// Obtain our environment naming context
Context initCtx = new InitialContext();
Context envCtx = (Context) initCtx.lookup("java:comp/env");

// Look up our data source
DataSource ds = (DataSource)
  envCtx.lookup("jdbc/EmployeeDB");

// Allocate and use a connection from the pool
Connection conn = ds.getConnection();
// ... use this connection to access the database ...
conn.close();
To this:


try {
      Context ctx = new InitialContext();
      if (ctx == null)
          throw new Exception("Boom - No Context");

      DataSource ds =
            (DataSource) ctx.lookup("java:comp/env/jdbc/TestDB");

      if (ds != null) {
          Connection conn = ds.getConnection();
          // ... use the connection, then close it ...
          conn.close();
      }
} catch (Exception e) {
      e.printStackTrace();
}


Hope this helps!

Rick

>
> Everything else (servlet, taglib) seems to work fine.  fyi, I set up the
datasource using the tomcat administrator under Resources -> Datasources.
(I also tried editing server.xml manually to create it, but got an error
there as well.)  Also worth noting, when I go through the administrator to
Tomcat server -> Server -> Host -> Context (/para) -> Resources ->
Datasources, I get an error message "org.apache.jasper.JasperException:
Exception retrieving attribute 'driverClassName'" which is confusing since
that attribute is defined in server.xml.  This error message isn't in any of
the other webapps.  If I take the <resource-ref> code out of web.xml, that
error message goes away too.
>
> You're being very patient and helpful - thank you very much!
>
> Here is webapps\para\web-inf\web.xml
> <?xml version="1.0" encoding="ISO-8859-1"?>
> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application
2.3//EN"
>     "http://java.sun.com/dtd/web-app_2_3.dtd">
> <web-app>
>      <display-name>paralegal</display-name>
>      <description>Paralegal status</description>
>      <servlet>
>           <servlet-name>
>                LawSchools
>           </servlet-name>
>           <servlet-class>
>                LawSchools
>           </servlet-class>
>      </servlet>
>     <servlet-mapping>
>         <servlet-name>
>             LawSchools
>         </servlet-name>
>         <url-pattern>
>             /LawSchools
>         </url-pattern>
>     </servlet-mapping>
>      <taglib>
>
<taglib-uri>http://jakarta.apache.org/taglibs/application-1.0</taglib-uri>
>           <taglib-location>/WEB-INF/c.tld</taglib-location>
>      </taglib>
>   <resource-ref>
>       <description>My DB Connection</description>
>       <res-ref-name>jdbc/mydb</res-ref-name>
>       <res-type>javax.sql.DataSource</res-type>
>       <res-auth>Container</res-auth>
>   </resource-ref>
> </web-app>
> ************************
> And this is server.xml
>
> <?xml version='1.0' encoding='utf-8'?>
> <Server className="org.apache.catalina.core.StandardServer" debug="0"
port="8005" shutdown="SHUTDOWN">
>   <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener"
debug="0" jsr77Names="false"/>
>   <Listener
className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener"
debug="0"/>
>   <GlobalNamingResources>
>     <Environment name="simpleValue" override="true"
type="java.lang.Integer" value="30"/>
>     <Resource auth="Container" description="User database that can be
updated and saved" name="UserDatabase" scope="Shareable"
type="org.apache.catalina.UserDatabase"/>
>     <Resource name="jdbc/profsysbackup" scope="Shareable"
type="javax.sql.DataSource"/>
>     <ResourceParams name="UserDatabase">
>       <parameter>
>         <name>factory</name>
>         <value>org.apache.catalina.users.MemoryUserDatabaseFactory</value>
>       </parameter>
>       <parameter>
>         <name>pathname</name>
>         <value>conf/tomcat-users.xml</value>
>       </parameter>
>     </ResourceParams>
>     <ResourceParams name="jdbc/mydb">
>       <parameter>
>         <name>maxWait</name>
>         <value>5000</value>
>       </parameter>
>       <parameter>
>         <name>maxActive</name>
>         <value>2</value>
>       </parameter>
>       <parameter>
>         <name>password</name>
>         <value>xxx</value>
>       </parameter>
>       <parameter>
>         <name>url</name>
>         <value>jdbc:sybase:Tds:xxx:5000</value>
>       </parameter>
>       <parameter>
>         <name>driverClassName</name>
>         <value>com.sybase.jdbc2.jdbc.SybDriver</value>
>       </parameter>
>       <parameter>
>         <name>maxIdle</name>
>         <value>2</value>
>       </parameter>
>       <parameter>
>         <name>username</name>
>         <value>xxx</value>
>       </parameter>
>     </ResourceParams>
>   </GlobalNamingResources>
>   <Service className="org.apache.catalina.core.StandardService" debug="0"
name="Tomcat-Standalone">
>     <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
acceptCount="100" bufferSize="2048" compression="off" connectionLinger="-1"
connectionTimeout="20000" debug="0" disableUploadTimeout="true"
enableLookups="true" maxKeepAliveRequests="100" maxProcessors="75"
minProcessors="5" port="8080"
protocolHandlerClassName="org.apache.coyote.http11.Http11Protocol"
proxyPort="0" redirectPort="8443" scheme="http" secure="false"
tcpNoDelay="true" useURIValidationHack="false">
>       <Factory
className="org.apache.catalina.net.DefaultServerSocketFactory"/>
>     </Connector>
>     <Connector className="org.apache.coyote.tomcat4.CoyoteConnector"
acceptCount="10" bufferSize="2048" compression="off" connectionLinger="-1"
connectionTimeout="0" debug="0" disableUploadTimeout="false"
enableLookups="true" maxKeepAliveRequests="100" maxProcessors="75"
minProcessors="5" port="8009"
protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler"
proxyPort="0" redirectPort="8443" scheme="http" secure="false"
tcpNoDelay="true" useURIValidationHack="false">
>       <Factory
className="org.apache.catalina.net.DefaultServerSocketFactory"/>
>     </Connector>
>     <Engine className="org.apache.catalina.core.StandardEngine" debug="0"
defaultHost="localhost"
mapperClass="org.apache.catalina.core.StandardEngineMapper"
name="Standalone">
>       <Host className="org.apache.catalina.core.StandardHost"
appBase="webapps" autoDeploy="true"
configClass="org.apache.catalina.startup.ContextConfig"
contextClass="org.apache.catalina.core.StandardContext" debug="0"
deployXML="true"
errorReportValveClass="org.apache.catalina.valves.ErrorReportValve"
liveDeploy="true" mapperClass="org.apache.catalina.core.StandardHostMapper"
name="localhost" unpackWARs="true">
>         <Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="true"
charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
crossContext="false" debug="0" displayName="Tomcat Administration
Application" docBase="../server/webapps/admin"
mapperClass="org.apache.catalina.core.StandardContextMapper" path="/admin"
privileged="true" reloadable="false" swallowOutput="false" useNaming="true"
wrapperClass="org.apache.catalina.core.StandardWrapper">
>           <Logger className="org.apache.catalina.logger.FileLogger"
debug="0" directory="logs" prefix="localhost_admin_log." suffix=".txt"
timestamp="true" verbosity="1"/>
>         </Context>
>         <Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="true"
charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
crossContext="false" debug="0" displayName="Webdav Content Management"
docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\webdav"
mapperClass="org.apache.catalina.core.StandardContextMapper" path="/webdav"
privileged="false" reloadable="false" swallowOutput="false" useNaming="true"
wrapperClass="org.apache.catalina.core.StandardWrapper">
>         </Context>
>         <Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="true"
charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
crossContext="true" debug="0" displayName="paralegal"
docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\para"
mapperClass="org.apache.catalina.core.StandardContextMapper" path="/para"
privileged="false" reloadable="false" swallowOutput="false" useNaming="true"
wrapperClass="org.apache.catalina.core.StandardWrapper">
>           <Resource auth="Container" description="DB Connection"
name="jdbc/mydb" scope="Shareable" type="javax.sql.DataSource"/>
>         </Context>
>         <Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="true"
charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
crossContext="true" debug="0" displayName="Tomcat Examples"
docBase="examples"
mapperClass="org.apache.catalina.core.StandardContextMapper"
path="/examples" privileged="false" reloadable="true" swallowOutput="false"
useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
>           <Logger className="org.apache.catalina.logger.FileLogger"
debug="0" directory="logs" prefix="localhost_examples_log." suffix=".txt"
timestamp="true" verbosity="1"/>
>           <Parameter name="context.param.name" override="false"
value="context.param.value"/>
>           <Ejb home="com.wombat.empl.EmployeeRecordHome"
name="ejb/EmplRecord" remote="com.wombat.empl.EmployeeRecord"
type="Entity"/>
>           <Ejb description="Example EJB Reference"
home="com.mycompany.mypackage.AccountHome" name="ejb/Account"
remote="com.mycompany.mypackage.Account" type="Entity"/>
>           <Environment name="maxExemptions" override="true"
type="java.lang.Integer" value="15"/>
>           <Environment name="foo/name4" override="true"
type="java.lang.Integer" value="10"/>
>           <Environment name="minExemptions" override="true"
type="java.lang.Integer" value="1"/>
>           <Environment name="foo/bar/name2" override="true"
type="java.lang.Boolean" value="true"/>
>           <Environment name="name3" override="true"
type="java.lang.Integer" value="1"/>
>           <Environment name="foo/name1" override="true"
type="java.lang.String" value="value1"/>
>           <LocalEjb description="Example Local EJB Reference"
home="com.mycompany.mypackage.ProcessOrderHome"
local="com.mycompany.mypackage.ProcessOrder" name="ejb/ProcessOrder"
type="Session"/>
>           <Resource auth="SERVLET" name="jdbc/EmployeeAppDb"
scope="Shareable" type="javax.sql.DataSource"/>
>           <Resource auth="Container" name="mail/Session" scope="Shareable"
type="javax.mail.Session"/>
>           <ResourceParams name="jdbc/EmployeeAppDb">
>             <parameter>
>               <name>password</name>
>               <value></value>
>             </parameter>
>             <parameter>
>               <name>url</name>
>               <value>jdbc:HypersonicSQL:database</value>
>             </parameter>
>             <parameter>
>               <name>driverClassName</name>
>               <value>org.hsql.jdbcDriver</value>
>             </parameter>
>             <parameter>
>               <name>username</name>
>               <value>sa</value>
>             </parameter>
>           </ResourceParams>
>           <ResourceParams name="mail/Session">
>             <parameter>
>               <name>mail.smtp.host</name>
>               <value>localhost</value>
>             </parameter>
>           </ResourceParams>
>           <ResourceLink global="simpleValue" name="linkToGlobalResource"
type="java.lang.Integer"/>
>         </Context>
>         <Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="true"
charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
crossContext="false" debug="0"
docBase="c:/local/tomcat/jakarta-tomcat-4.1.24/webapps/application-examples"
mapperClass="org.apache.catalina.core.StandardContextMapper"
path="/application-examples" privileged="false" reloadable="false"
swallowOutput="false" useNaming="true"
wrapperClass="org.apache.catalina.core.StandardWrapper">
>         </Context>
>         <Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="true"
charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
crossContext="false" debug="0" displayName="Tomcat Documentation"
docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\tomcat-docs"
mapperClass="org.apache.catalina.core.StandardContextMapper"
path="/tomcat-docs" privileged="false" reloadable="false"
swallowOutput="false" useNaming="true"
wrapperClass="org.apache.catalina.core.StandardWrapper">
>         </Context>
>         <Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="true"
charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
crossContext="true" debug="0" displayName="Shyly"
docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\shyly"
mapperClass="org.apache.catalina.core.StandardContextMapper" path="/shyly"
privileged="false" reloadable="true" swallowOutput="false" useNaming="true"
wrapperClass="org.apache.catalina.core.StandardWrapper">
>         </Context>
>         <Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="true"
charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
crossContext="false" debug="0" displayName="Tomcat Manager Application"
docBase="../server/webapps/manager"
mapperClass="org.apache.catalina.core.StandardContextMapper" path="/manager"
privileged="true" reloadable="false" swallowOutput="false" useNaming="true"
wrapperClass="org.apache.catalina.core.StandardWrapper">
>           <ResourceLink global="UserDatabase" name="users"
type="org.apache.catalina.UserDatabase"/>
>         </Context>
>         <Context className="org.apache.catalina.core.StandardContext"
cachingAllowed="true"
charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true"
crossContext="false" debug="0" displayName="Welcome to Tomcat"
docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\ROOT"
mapperClass="org.apache.catalina.core.StandardContextMapper" path=""
privileged="false" reloadable="false" swallowOutput="false" useNaming="true"
wrapperClass="org.apache.catalina.core.StandardWrapper">
>         </Context>
>         <Logger className="org.apache.catalina.logger.FileLogger"
debug="9" directory="logs" prefix="localhost_log." suffix=".txt"
timestamp="true" verbosity="4"/>
>       </Host>
>       <Logger className="org.apache.catalina.logger.FileLogger" debug="0"
directory="logs" prefix="catalina_log." suffix=".txt" timestamp="true"
verbosity="1"/>
>       <Realm className="org.apache.catalina.realm.UserDatabaseRealm"
debug="0" resourceName="UserDatabase" validate="true"/>
>     </Engine>
>   </Service>
> </Server>
>
> At 02:12 PM 4/17/2003 -0400, you wrote:
>
> >Hi Shyly,
> >
> >It looks like you have everything right.
> >
> >You are not missing an environment variable, assuming you meant
> >%CATALINA_HOME% and not %TOMCAT% or %CATALINA_HOM% below.
> >
> >Do you have the context entry in server.xml inside <host>?
> >
> >Also do you have the <resource-ref> in the right place in the web.xml
file?
> >Those entries have to be in the right order.
> >
> >It has to be after </error-page> and before <security-constraint>.
> >
> >Can you post (or send directly) your entire server.xml and web.xml files
> >after sanitizing them?
> >
> >Rick
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: tomcat-user-help@jakarta.apache.org
>


---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org


Re: Help with access control (.htaccess)

Posted by Owen Boyle <ob...@bourse.ch>.
"J.D. Bronson" wrote:
> 
> I looked at the httpd.conf file over and over and found yet ANOTHER entry
> for this specific directory with different directives. Obviously apache
> read this AFTER my correct directives. Grrrr...

Arrgh... Equally specific directives are applied in the order in which
they are listed in the file - so later overrides earlier. "grep" is
useful for checking out multiple directives if you suspect you have a
conflict (use the "-i" switch to ignore case).

Glad you got it working!

Rgds,

Owen Boyle.

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: W3C <-> Apache

Posted by Rodent of Unusual Size <Ke...@Golux.Com>.
Ben Laurie wrote:
> 
> Henrik Frystyk Nielsen wrote:
> >
> > Since when has W3C been in any market? If you read our contract then we
> > explicitly state that we don't compete with anyone.
> 
> Oh come on! W3C is a consortium. Of who? Of Web developers. W3C may
> claim to not compete, but its members can't make that claim. I love this
> idea that you can state in your "contract" (who is this a contract
> with?) that you don't compete, and somehow, magically, it's true.

Come, come - let's try to work together towards a means of
collaboration and extending mutual trust and respect.  Enough
self-righteousness - on both sides.

Henrik, I think several of the Apache developers would like to
be involved in HTTP-NG design/evolution.  However, probably none of
them can dedicate 50% of their time.  Will W3C cut them some
slack?  Is 50% of someone's time who can barely spell HTTP worth
more to W3C than 5% of Roy's, Marc's, Jim's, Dean's, ... time?

#ken	P-)}

Re: W3C <-> Apache

Posted by Ben Laurie <be...@algroup.co.uk>.
Henrik Frystyk Nielsen wrote:
> >Now, in case people think I'm completely serious about this: of course,
> >I'm not. I'm more reasonable than W3C. OTOH, the temptation of 50k a hit
> >could make me much less reasonable. But seriously: in what way is what I
> >have said different to W3C's approach? If anything it should carry more
> >weight. We represent more market than they do, after all.
> 
> Since when has W3C been in any market? If you read our contract then we
> explicitly state that we don't compete with anyone.

Oh come on! W3C is a consortium. Of who? Of Web developers. W3C may
claim to not compete, but its members can't make that claim. I love this
idea that you can state in your "contract" (who is this a contract
with?) that you don't compete, and somehow, magically, it's true.

Cheers,

Ben [who only ever tells the truth. It says so in his "contract"].

-- 
Ben Laurie            |Phone: +44 (181) 735 0686|Apache Group member
Freelance Consultant  |Fax:   +44 (181) 735 0689|http://www.apache.org
and Technical Director|Email: ben@algroup.co.uk |Apache-SSL author
A.L. Digital Ltd,     |http://www.algroup.co.uk/Apache-SSL
London, England.      |"Apache: TDG" http://www.ora.com/catalog/apache

Re: svn diff on renamed files

Posted by kf...@collab.net.
Chris Hecker <ch...@d6.com> writes:
> > 1.  Fixing diff so it tracks renames on the file, so -r2:7 new.txt
> > works like you'd expect (meaning you didn't have to know the file
> > was renamed 6 times before you joined the company 18000 versions
> > ago). <snip>

Yup.  This appears to be the same as newly-filed issue #1375; I had
thought there was an older issue about essentially the same thing, but
now I can't seem to find it.  (Maybe someone else remembers which one?)

Suspect that Mike Pilato may have more to say about this; he's been
working on some fs schema changes that will help us deal with copies
and renames better.  A node will know what path it was created at, and
keep track of its successors; at the very worst, this would allow us
to go back to the origin and trace forward, though that seems clumsy
and Mike may have a better plan in mind.
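
The tracing idea above can be sketched in a few lines (hypothetical names, not Subversion's actual fs schema): each node revision records the path it was created at and a link to its predecessor, so the full rename history can be recovered by walking back to the origin.

```java
import java.util.ArrayList;
import java.util.List;

public class RenameTrace {
    // Each node revision knows the path it was created at and links to
    // its predecessor (where it was copied or renamed from).
    static class NodeRev {
        final String path;
        final NodeRev predecessor;
        NodeRev(String path, NodeRev predecessor) {
            this.path = path;
            this.predecessor = predecessor;
        }
    }

    // Walk back to the origin, collecting the rename history in order.
    static List<String> history(NodeRev node) {
        List<String> paths = new ArrayList<>();
        for (NodeRev n = node; n != null; n = n.predecessor)
            paths.add(0, n.path);
        return paths;
    }

    public static void main(String[] args) {
        NodeRev a = new NodeRev("/trunk/old.txt", null);
        NodeRev b = new NodeRev("/trunk/renamed.txt", a);
        NodeRev c = new NodeRev("/trunk/new.txt", b);
        System.out.println(history(c));
        // [/trunk/old.txt, /trunk/renamed.txt, /trunk/new.txt]
    }
}
```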

> > 2.  Making a symbol for URL-to-the-current-wc-directory (like the
> > symbols for HEAD, PREV, etc.), so I can just say REP_URL/foo.txt (or
> > whatever) to specify the full path.  This would just get the Url:
> > from info and use it, nothing fancy, just a shorthand.  <snip>

If (1) is fixed, what use did you have in mind for (2)?

> > 3.  If there's an @ symbol on a non-full-url'd file name, still look
> > for it at that revision number in the repository.  In other words,
> > old.txt is not in my wc, but old.txt@3 makes total sense
> > contextually, even though there's no old.txt in my wc.  This would
> > save a lot of typing.

Or: if we have (2), is (3) really necessary :-) ?

(I'm just trying to boil this down to the minimal feature set that
solves all the problems.  Don't have an opinion as to whether these
improvements should be pre- or post-1.0 yet.)

-K


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [users@httpd] Multiple URLs / One Site

Posted by tr...@clayst.com.
On 27 Apr 2004 Joshua Slive wrote:

> Because then all relative URL references would be wrong.  If you access
> "/dir", and then click on a relative link for "file", your browser will
> take you to "/file".  If, on the other hand, you had accessed "/dir/",
> your browser would have taken you to "/dir/file".  Since this resolution is
> handled by the browser, not the server, the server must inform the browser
> of the true URL with the trailing slash included.

Ah, OK, I see.  Thanks.

> To do it with a real browser, yes you'd need to change DNS.  But you don't
> need to use a real browser.  You can simply telnet to your server's IP on
> port 80 and then specify whatever you want in the Host: header, as in
> 
> telnet yourhost.example.com 80
> GET / HTTP/1.1
> Host: whatever.I.want

After I responded I was doing some reading, I think in the PHP docs, 
that made me wonder if faking the Host: header was the mechanism.  
Thanks for the explanation.

--
Tom




---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Multiple URLs / One Site

Posted by Joshua Slive <jo...@slive.ca>.
On Tue, 27 Apr 2004 trlists@clayst.com wrote:

> On 27 Apr 2004 Joshua Slive wrote:
>
> > Yes.  The most common case is a trailing-slash redirect: When someone
> > requests a directory without the trailing slash, apache must redirect them
> > to the same URL with a trailing slash added.
>
> Got it, thanks.  Now why doesn't it just add the slash?  Maybe I don't
> want to know :-).

Because then all relative URL references would be wrong.  If you access
"/dir", and then click on a relative link for "file", your browser will
take you to "/file".  If, on the other hand, you had accessed "/dir/",
your browser would have taken you to "/dir/file".  Since this resolution is
handled by the browser, not the server, the server must inform the browser
of the true URL with the trailing slash included.
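
This resolution behavior can be checked directly with java.net.URI, which implements the same standard relative-resolution rules browsers follow (the example.com host is just an illustration):

```java
import java.net.URI;

public class RelativeResolution {
    public static void main(String[] args) {
        // Relative links are resolved against the base URL's path, so
        // "/dir" and "/dir/" produce different targets for "file".
        URI noSlash = URI.create("http://example.com/dir");
        URI withSlash = URI.create("http://example.com/dir/");

        System.out.println(noSlash.resolve("file"));
        // http://example.com/file
        System.out.println(withSlash.resolve("file"));
        // http://example.com/dir/file
    }
}
```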

> > Also, the server name is used in server-generated error documents and
> > things like that.
>
> Good point.  I imagine it must be used in the logs too, though I
> haven't looked at the logging setup yet.

The logs can be configured either to use the ServerName or the
browser-supplied hostname.

> > Nothing major.  You should just be sure not to rely on the SERVER_NAME
> > environment variable, since an attacker could specify whatever he wants
> > there.
>
> I just checked and I'm not using this.  I'm trying to understand the
> mechanism though -- does an attacker have to map the server name they
> want to use to my IP then reference that as a URL, or can they do it
> without a DNS hack?

To do it with a real browser, yes you'd need to change DNS.  But you don't
need to use a real browser.  You can simply telnet to your server's IP on
port 80 and then specify whatever you want in the Host: header, as in

telnet yourhost.example.com 80
GET / HTTP/1.1
Host: whatever.I.want

Then apache will treat whatever.I.want as the server name.

Joshua.

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Multiple URLs / One Site

Posted by tr...@clayst.com.
On 27 Apr 2004 Joshua Slive wrote:

> Yes.  The most common case is a trailing-slash redirect: When someone
> requests a directory without the trailing slash, apache must redirect them
> to the same URL with a trailing slash added.

Got it, thanks.  Now why doesn't it just add the slash?  Maybe I don't 
want to know :-).

> Also, the server name is used in server-generated error documents and
> things like that.

Good point.  I imagine it must be used in the logs too, though I 
haven't looked at the logging setup yet.

> Nothing major.  You should just be sure not to rely on the SERVER_NAME
> environment variable, since an attacker could specify whatever he wants
> there.

I just checked and I'm not using this.  I'm trying to understand the 
mechanism though -- does an attacker have to map the server name they 
want to use to my IP then reference that as a URL, or can they do it 
without a DNS hack?

Thanks,

--
Tom




---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Multiple URLs / One Site

Posted by Joshua Slive <jo...@slive.ca>.
On Tue, 27 Apr 2004 trlists@clayst.com wrote:
> > There is one small caveat.  Occasionally apache needs to create
> > self-referential URLs.  For example, it sometimes needs to construct a
> > redirect pointing to itself.
>
> Out of curiosity, does it only do that when I do a redirect without
> specifying a full URL?  Or are there other conditions?

Yes.  The most common case is a trailing-slash redirect: When someone
requests a directory without the trailing slash, apache must redirect them
to the same URL with a trailing slash added.

Also, the server name is used in server-generated error documents and
things like that.

> > For that purpose, it can either use the configured ServerName, or it
> > can use whatever the client specified as the name it is looking at.
> > Since you want the latter, you'll want to set UseCanonicalName off
> > inside the VirtualHost section.
>
> I can do that, I read the docs on UseCanonicalName and they make sense.
> Are there any security implications to setting it Off?  I can't think
> of any, but wanted to double-check.

Nothing major.  You should just be sure not to rely on the SERVER_NAME
environment variable, since an attacker could specify whatever he wants
there.

> If I do it this way should I simply remove ServerName entirely?  Will
> it be used for anything at all with UseCanonicalName off?

I don't know, but you should just leave it in.  It won't hurt.

Joshua.

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: problem with the CLI

Posted by Upayavira <uv...@upaya.co.uk>.
> Upayavira,
> 
> I rebuilt Cocoon as you suggested.
> This indeed fixes my problem.
> I can now enjoy the full power of Cocoon without mysterious stack
> traces :-) Thanks a lot for your great help !

You're welcome.

And if you're planning to keep using the CLI, keep an eye on the CVS - I've got some 
major changes that I've been working on, but I'm not going to put them into CVS until 
after 2.1 has been released. There's a lot of refactoring, which would need more 
testing. Once this refactor is complete, it'll be much easier to support and extend.

Glad I was able to help.

Regards, Upayavira


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@cocoon.apache.org
For additional commands, e-mail: users-help@cocoon.apache.org


Re: problem with the CLI

Posted by Sylvain FORET <fo...@rsbs.anu.edu.au>.
>
>
>Sylvain,
>
>  
>
>>Thanks a lot for your answer.
>>I checked the URIs through a servlet with the built-in Jetty, and they
>>work fine, without generating any Exception. I don't think that the
>>problem comes from there, since the CLI does generate the URI I'm
>>asking for properly (but with these exceptions). The problem probably
>>stems from another file that cocoon cannot read ... but I really can't
>>figure out which one (I checked the logger config file, the
>>cocoon.xconf, and other resources declared in my CLI xconf file ...
>>they seem to be properly declared and found).
>>    
>>
>
>  
>
>>Any hint ?
>>    
>>
>
>I've just spotted in your original message that you gave the output of your run. It 
>seems that the problem is in the Deli block. Do you use Deli? If not, I'd suggest 
>rebuilding Cocoon without it and see how that goes.
>
>To do that: copy blocks.properties to local.blocks.properties and, in 
>local.blocks.properties, remove the hash (#) before the 
>exclude.block.deli=true line. Then do:
> build clean
>and then
>  build
>
>Let me know how that goes.
>
>Regards, Upayavira
>

Upayavira,

I rebuilt Cocoon as you suggested.
This indeed fixes my problem.
I can now enjoy the full power of Cocoon without mysterious stack traces :-)
Thanks a lot for your great help !

Warm regards,

Sylvain


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@cocoon.apache.org
For additional commands, e-mail: users-help@cocoon.apache.org


Re: Looking for a way to assign variables to threads

Posted by Jordi Salvat i Alabart <js...@atg.com>.
Oh, I see. I somehow understood that Shawn's case was with variables 
which are also defined globally. These wouldn't work. Sorry, my confusion.

-- 
Salut,

Jordi.

En/na mstover1@apache.org ha escrit:
> More or less - essentially what I meant.   In any case, the User 
> Parameters component explicitly assigns values to specific threads.
> 
> -Mike
> 
> On 6 Feb 2004 at 1:08, Jordi Salvat i Alabart wrote:
> 
> 
>>But doesn't the compiler perform replacement of global vars before the 
>>test starts? That's what I understood from the code... but it was pretty 
>>obscure, so I may be wrong.
>>
>>-- 
>>Salut,
>>
>>Jordi.
>>
>>En/na mstover1@apache.org ha escrit:
>>
>>>All variables are stored by thread.  Even global variables are 
>>>duplicated by value (not reference) to each thread.
>>>
>>>-Mike
>>>
>>>On 5 Feb 2004 at 9:57, Shawn Elliott wrote:
>>>
>>>
>>>
>>>>All,
>>>>
>>>>Is it possible to assign variables to threads?
>>>>
>>>>Currently I have a variable defined in a 'User Parameter' container
>>>>that is outside of my 4 thread groups.  Inside 2 of my thread groups I
>>>>have a Regular Expression storing a result into this variable.  I think
>>>>that this variable is being used by all of my threads.  How can I
>>>>define a variable not just to a thread group but to individual threads?
>>>>
>>>>I need each thread to store its information without it getting
>>>>overwritten by another one.
>>>>
>>>>Thank you,
>>>>Shawn
>>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>--
>>>Michael Stover
>>>mstover1@apache.org
>>>Yahoo IM: mstover_ya
>>>ICQ: 152975688
>>>AIM: mstover777
>>>
>>>
>>
> 
> 
> 
> 
> 
> --
> Michael Stover
> mstover1@apache.org
> Yahoo IM: mstover_ya
> ICQ: 152975688
> AIM: mstover777
> 
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org
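The behaviour described in this thread (every thread gets its own copy of a variable, so one thread's write never clobbers another's) is the same idea as Java's ThreadLocal. A minimal sketch of the concept only, not JMeter's actual implementation:

```java
// Per-thread variable storage, analogous to the behaviour described above.
// This is a conceptual sketch, NOT JMeter code.
public class ThreadVarsDemo {
    // Every thread that reads this gets its own copy, seeded from a default.
    private static final ThreadLocal<String> var =
            ThreadLocal.withInitial(() -> "global-default");

    public static String run() throws InterruptedException {
        Thread worker = new Thread(() -> var.set("worker-result"));
        worker.start();
        worker.join();
        // The worker wrote only to its own copy; ours is untouched.
        return var.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run()); // prints "global-default"
    }
}
```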


Re: application is NULL

Posted by Peter Royal <pr...@apache.org>.
On Thursday 20 June 2002 07:11 pm, bp@dankseeds.org wrote:
> > good work! you figured it out before i had a chance to look at it. If you
> > want to send a patch, that would be superb. It might be good to make the
> > sleep time configurable.
> > -pete
>
> sure thing. when you say "patch", do you mean a diff of the source, or do
> you want me to check out a copy and update the cvs ?

A diff of the source.

One way is to check out a copy of the CVS, make your change and then do
cvs diff -u

to generate a patch that can be put in bugzilla or sent to the mailing list
-pete

-- 
peter royal -> proyal@apache.org
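`cvs diff -u` produces a unified diff. The format itself can be previewed with plain diff on any two files (a sketch of the output format only, with illustrative file names, not an actual cvs run):

```shell
printf 'old line\n' > before.txt
printf 'new line\n' > after.txt
# diff exits 1 when the files differ, so guard it for scripted use
diff -u before.txt after.txt > fix.patch || true
cat fix.patch   # removed lines start with '-', added lines start with '+'
```

The resulting fix.patch is the kind of file to attach in Bugzilla or send to the mailing list.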

--
To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
For additional commands, e-mail: <ma...@jakarta.apache.org>


Re: application is NULL

Posted by bp...@dankseeds.org.
> eh, this is avalon-apps-dev, it should do, but I'll crosspost to 
> avalon-phoenix-dev.

yeah that's what i meant, avalon-phoenix-dev. 
 
> good work! you figured it out before i had a chance to look at it. If you 
> want to send a patch, that would be superb. It might be good to make the 
> sleep time configurable.
> -pete

sure thing. when you say "patch", do you mean a diff of the source, or do you want 
me to check out a copy and update the cvs ? 

i'll be out of town the rest of the week so i'll fix this monday unless someone has 
already done it. 

							BP





Re: application is NULL

Posted by Peter Royal <pr...@apache.org>.
On Wednesday 19 June 2002 03:53 pm, bp@dankseeds.org wrote:
> > I don't know the reason for the NPE at this time.
> > I tried PhoenixServlet in the current cvs. It does not work. Installation
> > is very difficult.
> > Current SingleAppEmbeddor and PhoenixServlet are broken.
>
> Not any more ;) check this:
>
> In the PhoenixServlet init() the Thread that you assigned to the embeddor
> doesn't have a chance to get the Application set up properly before it is
> used. Thus I just dropped in a 'Thread.sleep(1000)' and gave it a second to
> get set up properly. This seems to be a logical fix. Any comments to the
> contrary? I'm guessing this should go on the avalon-dev list also since it
> involves internal code.

eh, this is avalon-apps-dev, it should do, but I'll crosspost to 
avalon-phoenix-dev.

good work! you figured it out before i had a chance to look at it. If you 
want to send a patch, that would be superb. It might be good to make the 
sleep time configurable.
-pete

-- 
peter royal -> proyal@apache.org



Re: application is NULL

Posted by bp...@dankseeds.org.
> I don't know the reason for the NPE at this time.
> I tried PhoenixServlet in the current cvs. It does not work. Installation
> is very difficult.
> Current SingleAppEmbeddor and PhoenixServlet are broken.

Not any more ;) check this:

In the PhoenixServlet init() the Thread that you assigned to the embeddor doesn't 
have a chance to get the Application set up properly before it is used. Thus I just 
dropped in a 'Thread.sleep(1000)' and gave it a second to get set up properly. This 
seems to be a logical fix. Any comments to the contrary? I'm guessing this should 
go on the avalon-dev list also since it involves internal code.

try
{
    m_embeddor = new SingleAppEmbeddor();
    if ( m_embeddor instanceof Parameterizable )
    {
        ((Parameterizable)m_embeddor).parameterize( m_parameters );
    }
    m_embeddor.initialize();
    new Thread( this ).start();
    // Added a pause so the Application can start up properly.
    Thread.sleep(1000);
}
catch ( final Throwable throwable )
{
    <SNIP>
}

			BP
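A fixed one-second pause can still lose the race on a slow machine and wastes time on a fast one. One hedged alternative is a bounded polling loop; the sketch below is illustrative only (the names and the readiness check are assumptions, not Phoenix API):

```java
import java.util.function.BooleanSupplier;

public class StartupWait {
    /** Poll until ready() is true or timeoutMs elapses; report the final state. */
    public static boolean waitFor(BooleanSupplier ready, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (ready.getAsBoolean()) {
                return true;
            }
            Thread.sleep(50); // short poll instead of one long fixed sleep
        }
        return ready.getAsBoolean(); // one last check at the deadline
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Stand-in for "the Application has finished starting up".
        BooleanSupplier appReady = () -> System.currentTimeMillis() - start >= 200;
        System.out.println(waitFor(appReady, 5000)); // prints "true"
    }
}
```

Inside init() one would then wait on the actual deployment state with a configurable timeout instead of a bare Thread.sleep(1000).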





Re: application is NULL

Posted by Eung-ju Park <co...@hotmail.com>.
I don't know the reason for the NPE at this time.
I tried PhoenixServlet in the current cvs. It does not work. Installation is
very difficult.
The current SingleAppEmbeddor and PhoenixServlet are broken.
Please do not use PhoenixServlet for a production web application. Try
Fortress, Merlin, or your own container as the component container.
I will remove SingleAppEmbeddor and PhoenixServlet from Phoenix.
Sorry for the irresponsibility.

----- Original Message -----
From: <bp...@dankseeds.org>
To: "Avalon Applications Developers List"
<av...@jakarta.apache.org>
Sent: Wednesday, June 19, 2002 5:17 AM
Subject: Re: application is NULL


> > Can you send a copy of your logfiles w/everything running in DEBUG?
> > -pete
>
> ok here goes:
>
> just for background once again - I'm using the PhoenixServlet which in
> turn uses the SingleAppEmbeddor class. The PhoenixServlet does the
> following in init():
>
> --- new SingleAppEmbeddor() m_embeddor
>
> --- m_embeddor.parameterize(...)
> I've included all the appropriate parms in the Initialization Parms for
> the servlet, which get passed on to m_embeddor.parameterize(...). The
> sar file gets picked up and the components get configure(...)d properly,
> as I see the info in the logs. I use the following parameters:
> ~~~
> log-destination = PhoenixServlet
> log-priority = DEBUG
> application-name = CoreWebApp
> application-location = /WEB-INF/apps/CoreWebApp.sar
> ~~~
>
> --- m_embeddor.initialize(...)
>
> --- new Thread( this ).start()
> This calls m_embeddor.execute() which in turn calls
> m_embeddor.deployDefaultApplications(). This should set up the single
> application.
>
> --- getServletContext().setAttribute(Embeddor.ROLE,m_embeddor)
> Then PhoenixServlet places the SingleAppEmbeddor in the ServletContext
> from which I access it in another Servlet.
>
> Ok, the SingleAppEmbeddor comes out of the ServletContext fine and the
> cast to SingleAppEmbeddor is ok also. BUT when I try to call ANY method
> on the SingleAppEmbeddor which accesses the inner m_application
> reference I get NullPointerExceptions, i.e.
>
> m_embeddor.lookup(...)
> m_embeddor.hasComponent(...)
> m_embeddor.list()
>
> The following are my log files.
> ******************************************************
> /WEB-INF/apps/CoreWebApp/logs/default.log*
> *******************************************************
> 1024430557421 [INFO   ] (CommandRepository): Got config
> 1024430557437 [INFO   ] (CommandRepository): Added 'DefaultHttpForward.do'
> Command to the repository
> 1024430557437 [INFO   ] (CommandRepository): Added 'DefaultHttpRedirect.do'
> Command to the repository
> 1024430557453 [INFO   ] (UserRepository): Got config. Loading users.
> 1024430557468 [INFO   ] (UserRepository): Added 'bpurvis' to the User
> Repository
> 1024430557500 [INFO   ] (CommandDispatcher): Got config
> *******************************************************
>
> *********************************
> /WEB-INF/logs/phoenix.log*
> **********************************
> 1024430002203 [INFO   ] (Phoenix): Logger started
> 1024430002453 [INFO   ] (Phoenix.deployer): Installing Sar located at
> file:/G:/Workspace/CVSROOT/CoreWebApp/webApplication/WEB-
> INF/apps/CoreWebApp.sar.
> 1024430002453 [WARN   ] (Phoenix.deployer): The file SAR-INF/config.xml
> can not be extracted from the Sar
> "file:/G:/Workspace/CVSROOT/CoreWebApp/webApplication/WEB-
> INF/apps/CoreWebApp.sar" into directory
> G:\Workspace\CVSROOT\CoreWebApp\webApplication\WEB-
> INF\apps\CoreWebApp because there is a file in the way.
> 1024430002453 [WARN   ] (Phoenix.deployer): The file
> SAR-INF/environment.xml can not be extracted from the Sar
> "file:/G:/Workspace/CVSROOT/CoreWebApp/webApplication/WEB-
> INF/apps/CoreWebApp.sar" into directory
> G:\Workspace\CVSROOT\CoreWebApp\webApplication\WEB-
> INF\apps\CoreWebApp because there is a file in the way.
> 1024430002703 [WARN   ] (Phoenix.deployer): Warning: BlockInfo for class
> com.bpurvis.webapp.service.DefaultCommandDispatcher redundently specifies
> role name "com.bpurvis.webapp.services.CommandRepository" in dependency
> when it is identical to the name of service. It is recomended that the
> <role/> section be elided.
> 1024430002703 [WARN   ] (Phoenix.deployer): Warning: BlockInfo for class
> com.bpurvis.webapp.service.DefaultCommandDispatcher redundently specifies
> role name "com.bpurvis.webapp.services.UserRepository" in dependency when
> it is identical to the name of service. It is recomended that the <role/>
> section be elided.
> 1024430002703 [WARN   ] (Phoenix.deployer): Warning: BlockInfo for class
> com.bpurvis.webapp.service.DefaultCommandDispatcher redundently specifies
> role name "com.bpurvis.webapp.services.AuthorizationManager" in dependency
> when it is identical to the name of service. It is recomended that the
> <role/> section be elided.
> 1024430002859 [INFO   ] (Phoenix.deployer): Verifying that the name
> specified for Blocks and BlockListeners are valid.
> 1024430002859 [INFO   ] (Phoenix.deployer): Verifying that the name
> specified for Blocks and BlockListeners are unique.
> 1024430002859 [INFO   ] (Phoenix.deployer): Verifying that the specified
> Dependencies are valid according to BlockInfo.
> 1024430002859 [INFO   ] (Phoenix.deployer): Verifying that the
> dependencies of Blocks are valid with respect to other Blocks.
> 1024430002859 [INFO   ] (Phoenix.deployer): Verifying that there are no
> circular dependencies between Blocks.
> 1024430002859 [INFO   ] (Phoenix.deployer): Verifying that the specified
> Blocks have valid types.
> 1024430003000 [INFO   ] (Phoenix.deployer): Verifying that the specified
> BlockListeners have valid types.
> 1024430003140 [INFO   ] (Phoenix.kernel.CoreWebApp): 4 Blocks to process
> for phase "startup". Order of processing = [AuthorizationManager,
> CommandRepository, UserRepository, CommandDispatcher].
> 1024430428468 [INFO   ] (Phoenix.kernel.CoreWebApp): 4 Blocks to process
> for phase "shutdown". Order of processing = [CommandDispatcher,
> AuthorizationManager, CommandRepository, UserRepository].
>
> **************************************************************************
>
> ok, those are the only 2 logs I have set up at the moment. If you wish to
> see config.xml, assembly.xml, or environment.xml please let me know.
>
> Thanks !
>
> BP
>
>
>
>
>
>
>
>
>
>



Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Perrin Harkins <ph...@gmail.com>.
On 1/30/07, Todd Finney <tf...@boygenius.com> wrote:
> The sessions are modified on every request, to set a last_access time, and
> they're modified on login to set an authentication token.  I can't think of
> circumstances under which two different requests would attempt to modify a
> given session at the same time.

The biggest risks are things like multiple windows, AJAX calls, or
some kind of auth handler that needs to load the session for every
image request.

> As much as I'd really like to understand what's actually happening here,
> I'll switch to A::S::Lock::Null if you think that's the best bet.

For many people it is, but I can't say for certain if it is for you.
There may be a simpler solution.  See below.

> I don't
> see an example in the Apache::Session docs for switching the locking class,
> though - may I have a pointer?

You can either use Apache::Session::Flex or make your own
Apache::Session::MySQL.  Open up Apache::Session::MySQL and look at
the code.  It's nothing more than a config file.  If you copy it,
change the name (and package), and change the name of the locking
class, that will work.

As for what's going wrong, my guess is that it has to do with the
internal redirects that happen when you access / as opposed to
/index.phtml.  You are trying to open the session in the
HeaderParserHandler phase, so it's going to open a session, then do an
internal redirect, and try to open the same session again, effectively
deadlocking.

But why does that $r->pnotes call make all the difference?  Because
pnotes() is special, as described here:
http://perl.apache.org/docs/2.0/api/Apache2/RequestUtil.html#C_pnotes_

That's a 2.0 doc, but it applies to 1.0 as well: pnotes() increases
the reference count to $session rather than copying it, so it doesn't
get destroyed until after the internal redirect has completed and
pnotes gets torn down.  If you use a temporary variable to hold the
_session_id key, this will not happen and that may fix your problem.

I could rant about all the things that bother me about
Apache::Session, but the bottom line is what works for you.  Try the
suggestions here and good luck.

- Perrin

Re: documentation

Posted by Fotis Jannidis <Fo...@lrz.uni-muenchen.de>.
> How much time do you need? We'd like to get a JAR out shortly (next day or 
> so - incidentally, Fotis, I'm assuming that once I do up a JAR file using 
> build.sh or build.bat, that I just FTP it someplace? Never really thought 
> about it...)

Actually the last release came as a zip and as a tar file containing, next to the 
jar file, also the documentation, the sources, the additional files needed to build 
Fop, etc. 
Then you log into locus.apache.org via ssh, ftp the files from your local account, 
and put them into xml.apache.org/dist/fop. Sounds complicated, but I asked the 
last time and Pier assured me this is the way to do it. 

Fotis 


Re: documentation

Posted by Eric SCHAEFFER <es...@posterconseil.com>.
I think you understand now why, after a short thought, I preferred to commit my
work on the main tree rather than on the new release tree.

Eric.

----- Original Message -----
From: Stefano Mazzocchi <st...@apache.org>
To: <fo...@xml.apache.org>
Sent: Tuesday, May 23, 2000 11:55 AM
Subject: Re: documentation


> Arved Sandstrom wrote:
> >
> > At 04:47 PM 5/22/00 +0200, Eric SCHAEFFER wrote:
> > >
> > >So, where do I commit ?
> > >
> > >Eric.
> > >
> > No harm done if you commit to 0-13-0. I buy Fotis' argument that if it's
> > solid, useful code that improves the interim release that we include it
there.
>
> Guys,
>
> you have been caught into the "stuff it there" syndrome.
>
> In case you didn't notice, open source works not because software is
> perfect but because software will always have bugs.
>
> Strangely enough, given a good underlying idea, the more problems, the
> more people come in, the faster the project advances.
>
> If you keep on waiting to do the release because of this "important
> feature that holds on forever" you fail to release early and often... in
> short, you fail to open the door for possible contributors.
>
> In Cocoon there are two secrets:
>
> 1) use higher version numbers rather than lower ones
> 2) never wait more than two months or less than two weeks for a release.
>
> A month frequency for releases is ideal: people usually take that amount
> of time to discover bugs that were already fixed. Also new people come
> in and if their own contributions don't get released soon, they walk
> away and don't contribute back.
>
> So, to sum up, I don't care _where_ you commit new functionality, as
> long as this doesn't hold back the release and breaks the month-cycle.
>
> --
> Stefano Mazzocchi      One must still have chaos in oneself to be
>                           able to give birth to a dancing star.
> <st...@apache.org>                             Friedrich Nietzsche
> --------------------------------------------------------------------
>  Missed us in Orlando? Make it up with ApacheCON Europe in London!
> ------------------------- http://ApacheCon.Com ---------------------
>
>


Re: documentation

Posted by Stefano Mazzocchi <st...@apache.org>.
Arved Sandstrom wrote:
> 
> At 04:47 PM 5/22/00 +0200, Eric SCHAEFFER wrote:
> >
> >So, where do I commit ?
> >
> >Eric.
> >
> No harm done if you commit to 0-13-0. I buy Fotis' argument that if it's
> solid, useful code that improves the interim release that we include it there.

Guys,

you have been caught into the "stuff it there" syndrome.

In case you didn't notice, open source works not because software is
perfect but because software will always have bugs.

Strangely enough, given a good underlying idea, the more problems, the
more people come in, the faster the project advances.

If you keep on waiting to do the release because of this "important
feature that holds on forever" you fail to release early and often... in
short, you fail to open the door for possible contributors.

In Cocoon there are two secrets:

1) use higher version numbers rather than lower ones
2) never wait more than two months or less than two weeks for a release.

A month frequency for releases is ideal: people usually take that amount
of time to discover bugs that were already fixed. Also new people come
in and if their own contributions don't get released soon, they walk
away and don't contribute back.

So, to sum up, I don't care _where_ you commit new functionality, as
long as this doesn't hold back the release and breaks the month-cycle.

-- 
Stefano Mazzocchi      One must still have chaos in oneself to be
                          able to give birth to a dancing star.
<st...@apache.org>                             Friedrich Nietzsche
--------------------------------------------------------------------
 Missed us in Orlando? Make it up with ApacheCON Europe in London!
------------------------- http://ApacheCon.Com ---------------------



Re: documentation

Posted by Arved Sandstrom <Ar...@chebucto.ns.ca>.
At 04:47 PM 5/22/00 +0200, Eric SCHAEFFER wrote:
>
>So, where do I commit ?
>
>Eric.
>
No harm done if you commit to 0-13-0. I buy Fotis' argument that if it's 
solid, useful code that improves the interim release that we include it there.

Arved



Re: documentation

Posted by Eric SCHAEFFER <es...@posterconseil.com>.
----- Original Message -----
From: Arved Sandstrom <Ar...@chebucto.ns.ca>
To: <fo...@xml.apache.org>
Sent: Monday, May 22, 2000 4:07 PM
Subject: Re: documentation


> Hi, Eric
>
> How much time do you need? We'd like to get a JAR out shortly (next day or
> so - incidentally, Fotis, I'm assuming that once I do up a JAR file using
> build.sh or build.bat, that I just FTP it someplace? Never really thought
> about it...)

I can try to do it now (or tomorrow morning if I've got problems)

>
> I think we can have a 0-13-1 or 0-13-2 as required. The idea is
> that this new branch is for producing stable JARs over the next few months,
> while we pound on the main trunk for work on FOP 1.0. Still, significantly
> new stuff should perhaps not go in the 0-13-0 stable release, unless you are
> quite happy with it. It's up to you - I can see the arguments both ways.

I've modified my 0-13-0 checkout, but I can do it again on the latest CVS
source.
In fact, image support works using only a Jimi implementing class (it's hard
coded in the FopImageFactory class). I need to add code to use a config file.
And only /FlateDecode and /ASCIIHexDecode filters are implemented.

Code needs to be added and modified, but I wanted to have something working
for JavaOne. And Pankaj (and others) can help me if they have access to my new
classes (Pankaj has already implemented ASCII85Decode filter, and he can
implement the FopImage interface using JAI) ...

So, where do I commit ?

Eric.

>
> Arved
>
> At 02:19 PM 5/22/00 +0200, Eric SCHAEFFER wrote:
> >
> >----- Original Message -----
> >From: Fotis Jannidis <Fo...@lrz.uni-muenchen.de>
> >To: <fo...@xml.apache.org>
> >Sent: Monday, May 22, 2000 1:32 PM
> >Subject: Re: documentation
> >
> >
> >> (BTW: We have to fix Version.java to output the correct version info)
> >> Otherwise, we could release and go on.
> >> I think work on everything else goes into the preparation of release 1.0,
> >doesn't it?
> >>
> >> Fotis
> >
> >I've got something working for images (only one implementing class using
> >Jimi). Just let me some time to commit...
> >
> >Eric.
> >
> >PS: I thought to commit on 0.13.0 version. Are you ok ?
> >
> >
> >
>


Re: documentation

Posted by Arved Sandstrom <Ar...@chebucto.ns.ca>.
Hi, Eric

How much time do you need? We'd like to get a JAR out shortly (next day or 
so - incidentally, Fotis, I'm assuming that once I do up a JAR file using 
build.sh or build.bat, that I just FTP it someplace? Never really thought 
about it...)

I think we can have a 0-13-1 or 0-13-2 as required. The idea is 
that this new branch is for producing stable JARs over the next few months, 
while we pound on the main trunk for work on FOP 1.0. Still, significantly 
new stuff should perhaps not go in the 0-13-0 stable release, unless you are 
quite happy with it. It's up to you - I can see the arguments both ways.

Arved

At 02:19 PM 5/22/00 +0200, Eric SCHAEFFER wrote:
>
>----- Original Message -----
>From: Fotis Jannidis <Fo...@lrz.uni-muenchen.de>
>To: <fo...@xml.apache.org>
>Sent: Monday, May 22, 2000 1:32 PM
>Subject: Re: documentation
>
>
>> (BTW: We have to fix Version.java to output the correct version info)
>> Otherwise, we could release and go on.
>> I think work on everything else goes into the preparation of release 1.0,
>doesn't it?
>>
>> Fotis
>
>I've got something working for images (only one implementing class using
>Jimi). Just let me some time to commit...
>
>Eric.
>
>PS: I thought to commit on 0.13.0 version. Are you ok ?
>
>
>


Re: documentation

Posted by Fotis Jannidis <Fo...@lrz.uni-muenchen.de>.
From:           	"Eric SCHAEFFER" <es...@posterconseil.com>

> I've got something working for images (only one implementing class using
> Jimi). Just let me some time to commit...
> 
> Eric.
> 
> PS: I thought to commit on 0.13.0 version. Are you ok ?
 
By all means. I believe any addition which makes this release more interesting 
and doesn't delay it too much is more than welcome. 
Fotis



Re: documentation

Posted by Eric SCHAEFFER <es...@posterconseil.com>.
----- Original Message -----
From: Fotis Jannidis <Fo...@lrz.uni-muenchen.de>
To: <fo...@xml.apache.org>
Sent: Monday, May 22, 2000 1:32 PM
Subject: Re: documentation


> (BTW: We have to fix Version.java to output the correct version info)
> Otherwise, we could release and go on.
> I think work on everything else goes into the preparation of release 1.0,
doesn't it?
>
> Fotis

I've got something working for images (only one implementing class using
Jimi). Just let me some time to commit...

Eric.

PS: I thought to commit on 0.13.0 version. Are you ok ?



Re: Image Size Issue

Posted by Clay Leeds <cl...@medata.com>.
On Jun 2, 2004, at 6:42 PM, Benjohn P. Villedo wrote:
> On 2 Jun 2004 at 14:51, Chris Bowditch wrote:
>> One possible cause is that you haven't specified width and height
>> attributes on the external-graphic tag. If you don't specify the width
>> and height, FOP assumes 72dpi, and if you have a high-res graphic being
>> rendered at 72dpi the result is that the dimensions are bigger than a
>> page and the image will be dropped.
>> Do you see any warnings at the console when you run from the command
>> line?
>>
>> Chris
>
> very true, I didn't specify width and height because the images that
> are being attached to the pdf have variable dimensions. hmm, so
> that's the behaviour of FOP. I actually tried changing the page size
> to 40x40 inches and still the image did not display... as regards
> the command line warnings... I don't see them, if there are any, because
> I just access the xml file via browser; say it resulted in a 1234.xml file,
> I access it via http://FQDN:8080/cocoon/xmlfiles/1234.pdf. hope this
> information could further give enlightenment to this case... any other
> suggestions? all the best!!!
>
> pal,
> benjohn

This link has some good information about Graphics Resolution:

http://xml.apache.org/fop/graphics.html#resolution

In particular, it indicates that if you specify a single dimension, it 
will proportionately scale the image to fit. Thus, you could specify a 
maximum width (or height) and it would scale to that size. You just 
need to be sure the corresponding image dimension doesn't put the image 
off the page.

Web Maestro Clay
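A minimal XSL-FO sketch of the single-dimension approach described above (the src value and the width are illustrative):

```xml
<fo:block>
  <!-- Only content-width is given; the height scales proportionately,
       so pick a width that keeps the scaled height on the page. -->
  <fo:external-graphic src="url('images/photo.jpg')" content-width="10cm"/>
</fo:block>
```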


---------------------------------------------------------------------
To unsubscribe, e-mail: fop-user-unsubscribe@xml.apache.org
For additional commands, e-mail: fop-user-help@xml.apache.org


Re: Fwd: [PATCH][svnmerge] Fix reporting message for 'avail --blocked'

Posted by Michael W Thelen <mi...@pietdepsi.com>.
Madan U Sreenivasan wrote:
> A gentle reminder to svnmerge committers about this patch....

Thanks for the patch and the reminder... if no one responds shortly,
I'll file an issue for your patch in the issue tracker.

Here's a link to the patch in the archive:
http://svn.haxx.se/dev/archive-2006-04/0539.shtml

-- 
Michael W Thelen
It is a mistake to think you can solve any major problems just with
potatoes.       -- Douglas Adams

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: apache svn server memory usage?

Posted by Branko Čibej <br...@xbc.nu>.
Chris Hecker wrote:

> The httpd process is only taking <10% of CPU when an svn command is
> running as well, and the net isn't remotely taxed.  So, I guess I need
> to look into disk IO, but I don't see why caching wouldn't take care
> of that (the machine was running with 200mb devoted to cache according
> to the last top I posted).

Caching doesn't help you when you have to fsync the database log files
at every transaction commit.


-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/



Re: Default value for global-ignores - Import excluded .so files

Posted by Jan Hendrik <li...@gmail.com>.
Concerning Re: Default value for global-ignore
Stephen Connolly wrote on 24 Feb 2009, 18:21, at least in part:

> But these binary files may be internally compressed (even if only by
> RLE), which would make a 1-bit change have a rather large effect...

Some time ago there was a discussion on this list with respect to 
OpenOffice files, which actually are ZIP archives - only that the 
average OO file is still a good deal smaller than images.  AFAIK 
Nikon's RAW format is heavily compressed, many TIFFs are LZW 
compressed, dunno what PS does to its PSP format.

> plus changing an image file usually results in a massive change to the
> bits of the image (even if you only tweaked the contrast by 0.1%)

Exactly.

Jan Hendrik
---------------------------------------
Freedom quote:

     One's philosophy is not best expressed in words;
     it is expressed in the choices one makes ...
     and the choices we make are ultimately our responsibility.
               -- Eleanor Roosevelt

------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=1222675

To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].

Re: Default value for global-ignores - Import excluded .so files

Posted by Stephen Connolly <st...@gmail.com>.
But these binary files may be internally compressed (even if only by RLE),
which would make a 1-bit change have a rather large effect...

plus changing an image file usually results in a massive change to the bits
of the image (even if you only tweaked the contrast by 0.1%)

2009/2/24 Tyler <ty...@cryptio.net>

> On Tue, Feb 24, 2009 at 04:53:57PM +0100, Jan Hendrik wrote:
> > up to 25% larger.  Now being binary stuff any minor (non-)change
> > will increase the repository by more or less the whole size. You'd
> > pretty soon run out of disk space.  There are better means to back
>
> I believe this is incorrect. svn uses binary diffs on the backend, so
> unless TIFF and .PSD are quite volatile, small changes to the file
> should result in small diffs and small increases in the file size on the
> backend.
>
> tyler
>
>


Re: Default value for global-ignores - Import excluded .so files

Posted by Tyler <ty...@cryptio.net>.
On Tue, Feb 24, 2009 at 04:53:57PM +0100, Jan Hendrik wrote:
> up to 25% larger.  Now being binary stuff any minor (non-)change 
> will increase the repository by more or less the whole size. You'd 
> pretty soon run out of disk space.  There are better means to back 

I believe this is incorrect. svn uses binary diffs on the backend, so
unless TIFF and .PSD are quite volatile, small changes to the file
should result in small diffs and small increases in the file size on the
backend.

tyler

Re: Subversion server update via yum on Fedora 2

Posted by Toby Johnson <to...@etjohnson.us>.
Rob Hills wrote:

>OK, I suspected that the "subversion-server" package may have disappeared.  I was 
>also contemplating installing the latest 1.1 release candidate but wasn't sure how to 
>use the tarballs on the subversion site.  Is there any idiot's guide to upgrading using 
>the tarballs?
>  
>
You can get FC2 rpms for Subversion 1.1 at 
http://atrpms.net/dist/fc2/subversion/ . We are using them in our 
production environment and they work fine.

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Book with Style

Posted by tr...@clayst.com.
On 3 Dec 2004 Joe wrote:

> I think what Ben was referring to is after you click on "Save Page As", 
> in the "Save As" box from Windows, where it says "Save as type: Web 
> Page, HTML only", you should be able to use the drop-down and select 
> "Web Page, complete".

Ah, OK.  It looked so much like the standard Windows Save As dialog 
that I didn't realize they'd added their own file types.  Thanks.

--
Tom


Re: Book with Style

Posted by Joe <sv...@freedomcircle.net>.
Hi Tom,

trlists@clayst.com wrote:

> Odd, I can't find that one, I'm on Mozilla 1.7.3 for Windows.  All it 
> has is "Save Page As" which saves only the HTML.

I think what Ben was referring to is after you click on "Save Page As", 
in the "Save As" box from Windows, where it says "Save as type: Web 
Page, HTML only", you should be able to use the drop-down and select 
"Web Page, complete".

Joe



Re: [users@httpd] Newbie Question: Does mod_dir require mod_mime?

Posted by Joshua Slive <jo...@slive.ca>.
On Tue, 17 Jun 2003, Aaron Spike wrote:
> Let me make apparent my ignorance in another area: configure/make.
> Does that mean that it isn't possible to automagically build a custom
> configuration file for non-default installs using configure/make? If
> I had the time and knowledge, could I submit a patch to the
> configure script and makefile in the apache source tree that would
> intelligently create a custom httpd.conf for a custom install? Or is
> that sort of customization just not possible with configure/make, and
> better suited for a separate script? (a little off topic, isn't it?)

configure and make call plenty of little scripts to do this kind of work.
They already do a bunch of customization in terms of pathnames and things
like that.

The <IfModule> directives in httpd.conf are there to make things mostly
work with or without various modules, so I'm not sure exactly what kind of
customizations you are talking about.
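
For example, the stock httpd.conf guards module-specific directives so the same file parses with or without the module loaded (the directive names are standard; the surrounding context here is illustrative):

```apacheconf
# Active only if mod_dir was compiled in / loaded:
<IfModule mod_dir.c>
    DirectoryIndex index.html
</IfModule>

# Active only if mod_negotiation is present:
<IfModule mod_negotiation.c>
    AddLanguage en .en
    AddLanguage de .de
</IfModule>
```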

Personally, I'm of the opinion that the default config file is to be used
for a default install.  When you change the default install, you should
expect to need to change the default config.  But people interested in
improving the install process are always welcome.  You should discuss your
ideas on dev@httpd.apache.org.

Joshua.



---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Newbie Question: Does mod_dir require mod_mime?

Posted by Aaron Spike <sa...@mlc-wels.edu>.
On 17 Jun 2003 at 10:50, Joshua Slive wrote:
> > "Make install" should
> > know that files with extensions for language negotiation are useless
> > without mod_negotiation enabled, if that is possible.
>
> There is no way that is ever going to happen.  
Let me make apparent my ignorance in another area: configure/make. 
Does that mean that it isn't possible to automagically build a custom 
configuration file for non-default installs using configure/make? If 
I had the time and knowledge, could I submit a patch to the 
configure script and makefile in the apache source tree that would 
intelligently create a custom httpd.conf for a custom install? Or is 
that sort of customization just not possible with configure/make, and 
better suited for a separate script? (a little off topic, isn't it?)



Re: [users@httpd] Newbie Question: Does mod_dir require mod_mime?

Posted by Joshua Slive <jo...@slive.ca>.
On Tue, 17 Jun 2003, Aaron Spike wrote:
> A bit unclear on my part. If I configure apache with
> --disable-module=negotiation, make all and make install. The whole gamut of
> multilingual 'index.html's are installed. That is index.html.en,
> index.html.de, index.html.es, etc. But there is no "index.html". So
> with no negotiation and "DirectoryIndex index.html" I can't see a
> default page unless I refer to it explicitly and include the language
> extension. So my issue is with the install. "Make install" should
> know that files with extensions for language negotiation are useless
> without mod_negotiation enabled, if that is possible.

There is no way that is ever going to happen.  The default configuration
of apache works if you install with the default options.  If you choose to
install with non-default options, then you must also change the default
configuration.  In any case, the index.html.* files have no real purpose
in life anyway; you might as well just delete them.

> The worst part
> is that --disable-module=negotiation renders the online manual
> useless until I do "for file in `ls *.en`; do mv $file
> ${file%\.en};done" in a few directories.

Again, the docs are designed to be viewed through the server with a
near-default configuration.  If you don't like that, you have plenty of
other choices, including downloadable docs in several formats:
http://www.apache.org/dist/httpd/docs/
(We're still working on a new pdf version and single-language html
versions.)

Joshua.



Re: Once again, Repetitive loss of connection with JDBC request and MySQL

Posted by ms...@apache.org.
Just FYI, historically, JMeter's connection pool comes from dealing with 
databases like Access, Foxpro, and Interbase, and I found that if a 
connection was allowed to be reused over and over again, memory leaks 
occurred in the very old jdbc drivers I was using (this was 5-6 years ago), and 
a maximum reuse setting was useful for clearing the memory.  I've also found 
that mysql performs much faster with fairly low max reuse settings (like 
around 50), which seems odd, but there it is.

I realize most connection pools don't even have this setting these days, and 
it doesn't seem necessary for most modern drivers.
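
For anyone curious, the mechanism can be sketched in a few lines (hypothetical names, not JMeter's actual pool classes):

```python
# Sketch of a "max reuse" pool (made-up names, not JMeter's code):
# a connection is handed out again until it has been used max_reuse
# times, then it is dropped so driver-side leaks get reclaimed.
class MaxReusePool:
    def __init__(self, factory, max_reuse):
        self.factory = factory
        self.max_reuse = max_reuse
        self.idle = []
        self.created = 0          # how many real connections we opened

    def acquire(self):
        if self.idle:
            return self.idle.pop()
        self.created += 1
        return {"conn": self.factory(), "uses": 0}

    def release(self, entry):
        entry["uses"] += 1
        if entry["uses"] < self.max_reuse:
            self.idle.append(entry)   # still fresh enough to reuse
        # else: discard; the next acquire() opens a new connection

pool = MaxReusePool(factory=lambda: object(), max_reuse=50)
for _ in range(120):
    c = pool.acquire()
    pool.release(c)
print(pool.created)   # 120 uses at 50 per connection -> 3 connections
```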

-Mike

On 10 Aug 2003 at 20:58, Jeremy Arnold wrote:

> Hello,
>     Having the reuse number set to 1 also makes it a pretty unfair 
> performance comparison between the new code I sent and the original 
> JMeter code.  My new code will currently always reuse the connections, 
> so it's like having the reuse number set to infinity.  Since it looks 
> like the new code is going to work okay, I'll have to do some additional 
> investigation to try to incorporate the reuse number or have some other 
> way to do more fine tuning of the pool.  Having a reuse count doesn't 
> really fit too well with the new model, although it could probably be 
> done.  On the other hand, I think this new code is probably closer to 
> how most J2EE DataSource implementations do connection pooling -- 
> reusing a connection unless it times out or fails in some odd way.
> 
>     Anyway, thanks for the input.  Please let us know if you learn 
> anything else, or if you encounter any problems with the new code.  I'll 
> do some more cleanup and investigate the above issues some more and then 
> commit it to CVS if/when I get something workable.
> 
> Jeremy
> 
> 
> mstover1@apache.org wrote:
> 
> >Did you try upping the reuse number?  Reuse of 1 defeats the whole point of 
> >having a database pool.
> >
> >  
> >
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-user-help@jakarta.apache.org
> 




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Once again, Repetitive loss of connection with JDBC request and MySQL

Posted by Jeremy Arnold <je...@bigfoot.com>.
Hello,
    Having the reuse number set to 1 also makes it a pretty unfair 
performance comparison between the new code I sent and the original 
JMeter code.  My new code will currently always reuse the connections, 
so it's like having the reuse number set to infinity.  Since it looks 
like the new code is going to work okay, I'll have to do some additional 
investigation to try to incorporate the reuse number or have some other 
way to do more fine tuning of the pool.  Having a reuse count doesn't 
really fit too well with the new model, although it could probably be 
done.  On the other hand, I think this new code is probably closer to 
how most J2EE DataSource implementations do connection pooling -- 
reusing a connection unless it times out or fails in some odd way.

    Anyway, thanks for the input.  Please let us know if you learn 
anything else, or if you encounter any problems with the new code.  I'll 
do some more cleanup and investigate the above issues some more and then 
commit it to CVS if/when I get something workable.

Jeremy


mstover1@apache.org wrote:

>Did you try upping the reuse number?  Reuse of 1 defeats the whole point of 
>having a database pool.
>
>  
>


---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: using reactor?

Posted by Jason van Zyl <ja...@zenplex.com>.
On Tue, 2002-12-10 at 10:52, Siegfried Göschl wrote:
> I found out that the ${basedir} is set correctly and patched my 
> plugins accordingly, but my JUnit test cases loading test data have no 
> idea of a ${basedir} (yet) ... I haven't looked at the BETWIXT yet.
> 
> What are the current ideas of refactoring reactor?! 

Simply clean up the inheritance so that it works properly. It should be
leak free and inherit correctly among projects. Once that is done people
can build what they like on top of something small and reliable.

> Would it make 
> sense to create a MAVEN process for each dependent subproject and use 
> the return value (TBD as far as I know) as an indicator of a successful 
> build?!

Some things in App.java need to be moved to Maven.java in order to get a
set of useful return values. For myself this is necessary for Continuum,
which does not run from the command line yet needs return values from builds,
as the return values are used as signifying tokens to the werkflow
process which controls the build. Once Maven tosses out some useful
return values itself, then again people should be able to build decent
tools upon the reactor.

> For my 50+ subprojects thingie  the MAVEN build seems to leak memory 
> (javac) which trashes my machine and patience ... :-)

I'm in the same boat.

> Thanks
> 
> Siegfried Goeschl
> CTO
> =================================
> IT20one GmbH
> mail: siegfried.goeschl@it20one.at
> 
> 
> On 10 Dec 2002 at 9:20, Jason van Zyl wrote:
> 
> > On Tue, 2002-12-10 at 04:44, Siegfried Göschl wrote:
> > > Hi Warner
> > > 
> > > I used reactor for two projects (one small and one big) and hit IMHO
> > >  a few limitations
> > > 
> > > +) as far as I know it is not possible to inherit JAR files (B7) but
> > > +
> > > I might be wrong (there was a posting)
> > 
> > What do you mean inherit JARs? 
> > 
> > Do you mean use the same version of JAR for an entire build?
> > 
> > > +) It is not possible to inherit project.properties and 
> > > build.properties from the master project
> > > 
> > > +) Reactor use its own working directory and screws up my testcase 
> > > since the try to load test resources using a relative path 
> > 
> > Look at the betwixt tests to see how to do this. The betwixt tests run
> > in the commons reactor and work in and outside the reactor. Any
> > resources that you use in tests need to use the notion of the
> > ${basedir} which maven will set correctly. I was thinking of making a
> > simple set of abstract classes for tests to make this easier but
> > haven't got there yet. But the betwixt AbstractTestCase will give you
> > an idea of what to do to fix this.
> > 
> > > +) CheckStyle does not find its property file during a reactor build
> > 
> > That's good to know, noted.
> > 
> > > +) With 50+ subprojects I have memory problems
> > > 
> > > 
> > > Therefore I'm triggering my builds not with reactor but still using
> > > POM inheritance and POM interpolation .... and wait for the next
> > > release ... :-)
> > > 
> > > 
> > > Siegfried Goeschl
> > > CTO
> > > =================================
> > > IT20one GmbH
> > > mail: siegfried.goeschl@it20one.at
> > > phone: +43-1-9900046
> > > fax: +43-1-52 37 888
> > > www.it20one.at
> > > 
> > > 
> > > On 9 Dec 2002 at 19:08, Warner Onstine wrote:
> > > 
> > > > Hi all,
> > > > I have project that is evolving and I was wondering if using
> > > > reactor will do what I want.
> > > > 
> > > > Basically I have two modules in cvs - one for core and one for
> > > > user interface.
> > > > 
> > > > The UI will have several sub-directories for each UI that can be
> > > > used with the core. ui - tapestry - web-ognl - cocoa ...etc.
> > > > 
> > > > What I would like is to define a top-level project.xml and
> > > > project.properties which adds a dependency for the core jar and
> > > > any other necessary jars. I would also like to define a top-level
> > > > maven.xml so that whoever builds can either build all of the ui
> > > > components or individual ones.
> > > > 
> > > > So far, from what I see, this is doable. Now, here's the question:
> > > > Would it be beneficial to use reactor? Or would it be better to
> > > > write my own maven.xml.
> > > > 
> > > > -warner
> > > > 
> > > > +warner onstine+
> > > > 
> > > > 
> > > > --
> > > > To unsubscribe, e-mail:  
> > > > <ma...@jakarta.apache.org> For
> > > > additional commands, e-mail:
> > > > <ma...@jakarta.apache.org>
> > > > 
> > > 
> > > --
> > > To unsubscribe, e-mail:  
> > > <ma...@jakarta.apache.org> For
> > > additional commands, e-mail:
> > > <ma...@jakarta.apache.org>
> > -- 
> > jvz.
> > 
> > Jason van Zyl
> > jason@zenplex.com
> > http://tambora.zenplex.org
> > 
> > In short, man creates for himself a new religion of a rational
> > and technical order to justify his work and to be justified in it.
> > 
> >   -- Jacques Ellul, The Technological Society
> > 
> > 
> > --
> > To unsubscribe, e-mail:  
> > <ma...@jakarta.apache.org> For
> > additional commands, e-mail:
> > <ma...@jakarta.apache.org>
> > 
> 
> 
> --
> To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
> For additional commands, e-mail: <ma...@jakarta.apache.org>
-- 
jvz.

Jason van Zyl
jason@zenplex.com
http://tambora.zenplex.org

In short, man creates for himself a new religion of a rational
and technical order to justify his work and to be justified in it.
  
  -- Jacques Ellul, The Technological Society


Re: Extra fields in SampleResult

Posted by Jordi Salvat i Alabart <js...@atg.com>.
Mike Stover wrote:
> Yeah, the Sampler does the work, and copies data from config objects into itself.  This could 
> probably be fixed by instead having the sampler hold config objects and delegate to them in the 
> event of null properties.  Then, after the sample run, the sampler can simply discard the 
> delegates.

So, we DO have solutions in case we decide cloning is an issue... we 
just need to run some performance testing to determine this :-)

BTW, I'm trying to track Oliver's work, but it's often too clever for me 
to follow without help. Oliver: a word or two from time to time on what 
you're on would be greatly appreciated.

Salut,

Jordi.



---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Re: Extra fields in SampleResult

Posted by Mike Stover <ms...@apache.org>.
On 25 Feb 2003 at 1:01, Jordi Salvat i Alabart wrote:

> 
> > I liked the idea of official "Property" objects from Oliver.  I think it's a rather big task though, 
> > and complicated.
> 
> Have I missed that discussion? Or are you referring to "the use of 
> special property objects" mentioned in 
> http://nagoya.apache.org/wiki/apachewiki.cgi?JMeterArchitecturalOverview ?

Well, there aren't many specifics there, and I haven't looked at Oliver's code myself.  I think 
the idea is to encapsulate some useful functionality in a Property hierarchy - with 
StringProperty, IntegerProperty, CollectionProperty, etc.  Then, when grabbing data from a 
TestElement, the Property object can arrange to deliver functions correctly, etc.

It would have a lot of advantages in terms of code consistency, facilitating functions, making 
the .jmx files very "regular".  It would have the disadvantage of making for a lot of verbose 
code, and affecting lots of classes.
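
Roughly, the idea would look like this (an illustrative Python sketch with made-up names, not Oliver's actual code):

```python
# Illustrative Property hierarchy (hypothetical names): each property
# knows how to yield its value, so callers never have to ask "is this
# string really a function?" -- a FunctionProperty evaluates on demand.
class Property:
    def __init__(self, name):
        self.name = name

    def get_value(self):
        raise NotImplementedError

class StringProperty(Property):
    def __init__(self, name, value):
        super().__init__(name)
        self.value = value

    def get_value(self):
        return self.value          # plain, static value

class FunctionProperty(Property):
    def __init__(self, name, func):
        super().__init__(name)
        self.func = func

    def get_value(self):
        return self.func()         # re-evaluated at each lookup

counter = iter(range(1, 10))
props = {
    "label": StringProperty("label", "HTTP Request"),
    "seq": FunctionProperty("seq", lambda: next(counter)),
}
print(props["label"].get_value())  # always "HTTP Request"
print(props["seq"].get_value())    # 1, then 2 on the next call
```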

> 
> Or should I go straight to Oliver's code?

It's probably worth a look.

> 
> > 
> > Also, there's another case I forgot - and the real reason Samplers are PerSampleClonable:  
> > Default config objects - they write into the Samplers, but you don't want that to become part of 
> > the Sampler permanently.  Of course, I can think of some interesting ways to deal with that 
> > too...
> 
> Is it that way? I thought it was the sampler knowing about the config 
> object, not the reverse. Maybe I should revisit those Wiki pages on 
> JMeter internals w.r.t. configs...

Yeah, the Sampler does the work, and copies data from config objects into itself.  This could 
probably be fixed by instead having the sampler hold config objects and delegate to them in the 
event of null properties.  Then, after the sample run, the sampler can simply discard the 
delegates.

-Mike

> 
> Salut,
> 
> Jordi.
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> 



--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777



Re: how to specify sample listener in plan

Posted by Richard Han <ha...@shaw.ca>.
It works if I just add the following to the plan

<node>
<testelement class="org.apache.jmeter.reporters.MyListener">
<property xml:space="preserve"
name="TestElement.test_class">org.apache.jmeter.reporters.MyListener</property>
</testelement>
</node>

----- Original Message -----
From: "Richard Han" <ha...@shaw.ca>
To: "JMeter Developers List" <jm...@jakarta.apache.org>
Sent: Monday, February 24, 2003 9:43 PM
Subject: how to specify sample listener in plan


> Hi,
> Please help me if you could.
>
> I am trying to let the test do something when the sampling is done. I am
> thinking writing an SampleListener, i.e.,
>
>     class MySampleListener implements SampleListener
>     {
>             public void sampleOccurred(SampleEvent sampleEvent)
>             {
>                     doSomething();
>             }
>       etc etc
>     }
>
>
> but I don't know how to specify my SampleListener in the test plan.
> BTW  I don't need any Visualizers because I am running JMeter in non-gui
> mode.
> I am using JMeter-1.8.1
>
> Thanks in advance!
>
> richard
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
>




how to specify sample listener in plan

Posted by Richard Han <ha...@shaw.ca>.
 Hi,
Please help me if you could.

I am trying to let the test do something when the sampling is done. I am
thinking writing an SampleListener, i.e.,

    class MySampleListener implements SampleListener
    {
            public void sampleOccurred(SampleEvent sampleEvent)
            {
                    doSomething();
            }
      etc etc
    }


but I don't know how to specify my SampleListener in the test plan.
BTW  I don't need any Visualizers because I am running JMeter in non-gui
mode.
I am using JMeter-1.8.1

Thanks in advance!

richard




Re: Extra fields in SampleResult

Posted by Jordi Salvat i Alabart <js...@atg.com>.
> I liked the idea of official "Property" objects from Oliver.  I think it's a rather big task though, 
> and complicated.

Have I missed that discussion? Or are you referring to "the use of 
special property objects" mentioned in 
http://nagoya.apache.org/wiki/apachewiki.cgi?JMeterArchitecturalOverview ?

Or should I go straight to Oliver's code?

> 
> Also, there's another case I forgot - and the real reason Samplers are PerSampleClonable:  
> Default config objects - they write into the Samplers, but you don't want that to become part of 
> the Sampler permanently.  Of course, I can think of some interesting ways to deal with that 
> too...

Is it that way? I thought it was the sampler knowing about the config 
object, not the reverse. Maybe I should revisit those Wiki pages on 
JMeter internals w.r.t. configs...

Salut,

Jordi.





Re: Extra fields in SampleResult

Posted by Mike Stover <ms...@apache.org>.
The problem isn't the speed of the functions (they can easily cache their results).  the problem 
is figuring out a good way to make all the code smart enough to grab the function and tell it to 
yield a result.  Right now, most code expects strings to be held as TestElement properties.  Not 
so hard to change that.  However, some TestElements hold collections, which are often 
expected to hold strings, but could be functions.  Without detailing a clever way of handling 
the functions, all JMeter code would be forced to examine every "String" property retrieved 
from anywhere in a TestElement and ask it if it's a function, and if so, execute it.  And then, 
only when in the context of a running test.

I liked the idea of official "Property" objects from Oliver.  I think it's a rather big task though, 
and complicated.

Also, there's another case I forgot - and the real reason Samplers are PerSampleClonable:  
Default config objects - they write into the Samplers, but you don't want that to become part of 
the Sampler permanently.  Of course, I can think of some interesting ways to deal with that 
too...

I suspect you're right about the value of an object pool in this case.  I think the number of 
TestElements created may be dwarfed by the number of strings and other objects created 
from cloning.

-Mike

On 24 Feb 2003 at 17:56, Jordi Salvat i Alabart wrote:

> My experience using pools to avoid allocations is very varied. Many 
> times they don't really improve overall performance much (the cost of 
> keeping the pool and cleaning up after usage is too close to that of 
> allocating and collecting garbage). I've seen cases (bad 
> implementations) where they degrade performance significantly. 
> Complexity always grows.
> 
> I'd definitely favour more restraint in using PerSampleClonable.
> 
> Maybe the function-caused cloning could be reexamined, too? What's the 
> cost of re-evaluating a function? Looks like most functions are pretty 
> simple (actually all but the regexp ones). How often is a property of a 
> test element used twice, effectively obtaining a gain from this 
> "function result caching" feature? Could we require functions to be fast 
> (maybe by caching their own results) instead? Or maybe we could keep 
> variables and remove functions proper, moving necessary calculations to 
> pre/post-processors?
> 
> Well... just popping out ideas. I think it's too early for such drastic 
> changes. We need some real performance analysis first.
> 
> Salut,
> 
> Jordi.
> 
> Mike Stover wrote:
> > On 24 Feb 2003 at 17:13, Jordi Salvat i Alabart wrote:
> > 
> > 
> >>What about only cloning the elements when we actually need to modify 
> >>them? Or even just agree to never modify them and store the "variable" 
> >>data in the result instead?
> > 
> > 
> > For the most part, this is what happens.  TestElements are only cloned in three 
> > cases:
> > 
> > 1.  If they implement PerThreadClonable.  Not really a problem since it's obviously 
> > necessary for those that do it, and it all happens before the threads start.
> > 2.  If they have functions that get replaced with real-time values during test runs.  
> > In this case, they get cloned at the time that their functions get replaced.
> > 3. If they implement PerSampleClonable.  There's probably some wiggle room here 
> > since AbstractSampler extends PerSampleClonable perhaps unnecessarily - most 
> > samplers could probably be written so as not to modify themselves.
> > 
> > In any case, that still leaves a lot of cloning going on.  A pool of test element 
> > objects would probably still help
> > 
> > -Mike
> > 
> > 
> >>Mike Stover wrote:
> >>
> >>>>A related issue is all that cloning (of test elements) that's happening 
> >>>>behind the scenes. I've not yet had time to actually measure, but I have 
> >>>>a feeling that it is tremendously expensive -- not the cloning itself, 
> >>>>but cleaning up all the garbage afterwards. Anyone got figures?
> >>>
> >>>
> >>>It's occurred to me on more than one occasion that JMeter could use a TestElement 
> >>>pool that creates objects as needed, or provides currently unused blank objects.  It 
> >>>would require changes to code so that TestElements are returned to the pool when 
> >>>done, but it would probably be worth it.
> >>>
> >>>-Mike
> >>>
> >>>
> >>>
> >>>>Salut,
> >>>>
> >>>>Jordi.
> >>>>
> >>>>
> >>>>---------------------------------------------------------------------
> >>>>To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> >>>>For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> >>>>
> >>>
> >>>
> >>>
> >>>
> >>>--
> >>>Michael Stover
> >>>mstover1@apache.org
> >>>Yahoo IM: mstover_ya
> >>>ICQ: 152975688
> >>>AIM: mstover777
> >>>
> >>>---------------------------------------------------------------------
> >>>To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> >>>For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> >>>
> >>>
> >>
> >>
> >>
> >>---------------------------------------------------------------------
> >>To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> >>For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> >>
> > 
> > 
> > 
> > 
> > --
> > Michael Stover
> > mstover1@apache.org
> > Yahoo IM: mstover_ya
> > ICQ: 152975688
> > AIM: mstover777
> > 
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> > For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> > 
> > 
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> 



--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777



Re: Extra fields in SampleResult

Posted by Jordi Salvat i Alabart <js...@atg.com>.
My experience using pools to avoid allocations is very varied. Many 
times they don't really improve overall performance much (the cost of 
keeping the pool and cleaning up after usage is too close to that of 
allocating and collecting garbage). I've seen cases (bad 
implementations) where they degrade performance significantly. 
Complexity always grows.
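
To make the trade-off concrete, here is a minimal sketch of the kind of pool being discussed. The names (SimplePool, borrow, release) are illustrative, not JMeter's actual API; real TestElements would also need their state reset on release, which is exactly the cleanup cost mentioned above.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Minimal single-threaded object pool sketch (illustrative names only).
class SimplePool<T> {
    private final Deque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    SimplePool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Hand out a recycled instance if one is idle, else allocate a new one.
    T borrow() {
        return free.isEmpty() ? factory.get() : free.pop();
    }

    // Callers must return objects when done -- the code change Mike
    // mentions -- and reset their state, the cleanup cost noted above.
    void release(T obj) {
        free.push(obj);
    }

    int idleCount() {
        return free.size();
    }
}

class PoolDemo {
    public static void main(String[] args) {
        SimplePool<StringBuilder> pool = new SimplePool<>(StringBuilder::new);
        StringBuilder first = pool.borrow();   // freshly allocated
        first.setLength(0);                    // caller resets state before returning it
        pool.release(first);
        StringBuilder second = pool.borrow();  // recycled: the same instance comes back
        System.out.println(first == second);   // true
    }
}
```

Whether the bookkeeping beats allocation-plus-GC is exactly the open question; measuring first, as suggested below, is the right order.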

I'd definitely favour more restraint in using PerSampleClonable.

Maybe the function-caused cloning could be reexamined, too? What's the 
cost of re-evaluating a function? Looks like most functions are pretty 
simple (actually all but the regexp ones). How often is a property of a 
test element used twice, effectively obtaining a gain from this 
"function result caching" feature? Could we require functions to be fast 
(maybe by caching their own results) instead? Or maybe we could keep 
variables and remove functions proper, moving necessary calculations to 
pre/post-processors?

Well... just popping out ideas. I think it's too early for such drastic 
changes. We need some real performance analysis first.

Salut,

Jordi.

Mike Stover wrote:
> On 24 Feb 2003 at 17:13, Jordi Salvat i Alabart wrote:
> 
> 
>>What about only cloning the elements when we actually need to modify 
>>them? Or even just agree to never modify them and store the "variable" 
>>data in the result instead?
> 
> 
> For the most part, this is what happens.  TestElements are only cloned in three 
> cases:
> 
> 1.  If they implement PerThreadClonable.  Not really a problem since it's obviously 
> necessary for those that do it, and it all happens before the threads start.
> 2.  If they have functions that get replaced with real-time values during test runs.  
> In this case, they get cloned at the time that their functions get replaced.
> 3. If they implement PerSampleClonable.  There's probably some wiggle room here 
> since AbstractSampler extends PerSampleClonable perhaps unnecessarily - most 
> samplers could probably be written so as not to modify themselves.
> 
> In any case, that still leaves a lot of cloning going on.  A pool of test element 
> objects would probably still help
> 
> -Mike
> 
> 
>>Mike Stover wrote:
>>
>>>>A related issue is all that cloning (of test elements) that's happening 
>>>>behind the scenes. I've not yet had time to actually measure, but I have 
>>>>a feeling that it is tremendously expensive -- not the cloning itself, 
>>>>but cleaning up all the garbage afterwards. Anyone got figures?
>>>
>>>
>>>It's occurred to me on more than one occasion that JMeter could use a TestElement 
>>>pool that creates objects as needed, or provides currently unused blank objects.  It 
>>>would require changes to code so that TestElements are returned to the pool when 
>>>done, but it would probably be worth it.
>>>
>>>-Mike
>>>
>>>
>>>
>>>>Salut,
>>>>
>>>>Jordi.
>>>>
>>>>
>>>>
>>
>>
>>
>>
> 
> 
> 
> 
> 
> 





Re: Before, after, and body regions

Posted by Arved Sandstrom <Ar...@chebucto.ns.ca>.
At 05:31 PM 9/23/00 -0400, Elliotte Rusty Harold wrote:
>At 8:24 PM +0200 9/18/00, Fotis Jannidis wrote:
>
>>Without the stylesheet I can only guess, but I remember that I had a
>>  problem with the counterintuitive approach to the margins in xsl:fo
>>(at least in my eyes)
>>
>>The spec says:
>>
>>>>>>
>>The position and size of the region-viewport-area is
>>specified relative to the content-rectangle of the page-reference-
>>area generated by fo:simple-page-master. The content-rectangle
>>of the region-viewport-area is indented from the content-rectangle
>>of the page-reference-area by the values of the "margin-top",
>>"margin-bottom", "margin-left" and "margin-right" properties.
>><<<<
>>
>
>That's it alright. Thanks. That is certainly counter-intuitive. If I 
>may rephrase, to make sure I have this right: the region-body 
>occupies the entire page, irrespective of the sizes of the before and 
>after and start and end regions. You set the margins of the region 
>body to keep the content drawn in the body from overlapping with the 
>content drawn in the other regions. Is that an accurate description?

I'm not sure I'd phrase it quite like that. :-) In particular, the 
region-body would not occupy the entire page, per se (although you could 
effectively make it do so). But sizing the region body to allow for other 
regions is true enough.

fo:simple-page-master has margin properties that indent in from page-width 
and page-height to set the dimensions of the content rectangle of the page 
reference area.

All five possible regions fall within this content rectangle.

The region-body margin traits are used to size the "viewport" reference area 
for the region body, which is considered to be centered, to leave room for 
the viewport areas for one or more of the before, after, start and end 
regions. The "region-reference-area" for the fo:region-body could be larger, 
so that if "overflow" is set to scroll we would have to implement scrolling. 
In any case what we see is constrained by the margins on the region body, so 
as to leave room for the other regions.

To sum up, there are 2 sets of margins to consider.

As an example, let us start with a US page, 8.5" by 11" (sorry). These 
dimensions (page-width and page-height) are the dimensions of the content 
rectangle of the page viewport area. We set the simple-page-master margins 
to 0.5" all round, so the content-rectangle of the page-reference-area is 
7.5" by 10". The margins 0.5" start and end, 1" before and after, on the 
fo:region-body, give us a centered content rectangle for the region viewport 
area which is now 6.5" wide and 8" tall. We have also allowed for 
fo:region-before and fo:region-after of 1", and fo:region-start and 
fo:region-end of 0.5".

As far as the content rectangle for the region-body reference area is 
concerned, the default overflow behaviour when an fo:flow is assigned to 
the region-body is "paginate", which means that we construct new pages when 
overflow occurs.  In other words, as far as content in region-body is 
concerned, we have a 6.5" by 8" area to work with in the example. You can 
also modify this with space and indents, but I don't want to muddy the waters.
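
The worked example above, expressed as markup (the dimensions are exactly the ones in the text; this is a sketch of the page-master only, not a complete stylesheet):

```xml
<fo:simple-page-master master-name="us-letter"
    page-width="8.5in" page-height="11in"
    margin-top="0.5in" margin-bottom="0.5in"
    margin-left="0.5in" margin-right="0.5in">
  <!-- content-rectangle of the page-reference-area: 7.5in x 10in -->
  <fo:region-body margin-top="1in" margin-bottom="1in"
                  margin-left="0.5in" margin-right="0.5in"/>
  <!-- region-body viewport: 6.5in wide by 8in tall, centered -->
  <fo:region-before extent="1in"/>
  <fo:region-after extent="1in"/>
  <fo:region-start extent="0.5in"/>
  <fo:region-end extent="0.5in"/>
</fo:simple-page-master>
```

Note that the region-body margins (the second set) must be at least as large as the extents of the edge regions, or the body content will overlap them.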

Hope this helps some. It's intricate stuff. :-)

Arved Sandstrom

Senior Developer
e-plicity.com (www.e-plicity.com)
Halifax, Nova Scotia
"B2B Wireless in Canada's Ocean Playground"


Re: FOP uses too much memory

Posted by Hani Elabed <ha...@elabed.net>.
Rick,

Your observations are correct. We ran into the
same problems at the Wisconsin Supreme Court.

FOP uses too much memory

Posted by Rick Rennoldson <ri...@lpa.com>.
We are using FOP to create reports in PDF format from an inventory planning system we are
developing.  The reports can be pretty large (an 8 MB FO document was reached in early testing),
and we run out of memory running FOP on the FO file.  We increased the maximum memory
allotted to the JVM to 200 MB, and still run out of memory.  Am I doing something wrong?
Does FOP absolutely need to load the entire document into memory at once?  Any help would
be appreciated, as I think that I will be prohibited from using this technology until it
is less memory intensive.

Thanks,

Rick
--
Rick Rennoldson
Systems Consultant
LPA Inc.
http://www.lpa.com
Rick_Rennoldson@lpa.com
(716) 419-3169



Re: Once again, Repetitive loss of connection with JDBC request and MySQL

Posted by Jeremy Arnold <je...@bigfoot.com>.
Hello,
    Having the reuse number set to 1 also makes it a pretty unfair 
performance comparison between the new code I sent and the original 
JMeter code.  My new code will currently always reuse the connections, 
so it's like having the reuse number set to infinity.  Since it looks 
like the new code is going to work okay, I'll have to do some additional 
investigation to try to incorporate the reuse number or have some other 
way to do more fine tuning of the pool.  Having a reuse count doesn't 
really fit too well with the new model, although it could probably be 
done.  On the other hand, I think this new code is probably closer to 
how most J2EE DataSource implementations do connection pooling -- 
reusing a connection unless it times out or fails in some odd way.

    Anyway, thanks for the input.  Please let us know if you learn 
anything else, or if you encounter any problems with the new code.  I'll 
do some more cleanup and investigate the above issues some more and then 
commit it to CVS if/when I get something workable.
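
As a sketch of the difference: timeout-based reuse keeps no per-connection counter at all; it simply discards connections that have sat idle too long. The types and names below are illustrative stand-ins, not the actual JMeter or java.sql classes.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative stand-in for a pooled database connection.
class PooledConn {
    final long lastUsedMillis;

    PooledConn(long lastUsedMillis) {
        this.lastUsedMillis = lastUsedMillis;
    }
}

// Reuse a connection unless it has idled past a timeout -- the model
// described above -- instead of counting how many times it was used.
class TimeoutPool {
    private final Deque<PooledConn> idle = new ArrayDeque<>();
    private final long maxIdleMillis;

    TimeoutPool(long maxIdleMillis) {
        this.maxIdleMillis = maxIdleMillis;
    }

    void release(PooledConn c) {
        idle.push(c);
    }

    PooledConn borrow(long nowMillis) {
        while (!idle.isEmpty()) {
            PooledConn c = idle.pop();
            if (nowMillis - c.lastUsedMillis <= maxIdleMillis) {
                return c; // still fresh: reuse it
            }
            // stale: a real pool would close the underlying connection here
        }
        return new PooledConn(nowMillis); // would open a real connection here
    }
}
```

A reuse count could still be layered on top (discard after N borrows), but as noted above it does not fit the timeout model naturally.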

Jeremy


mstover1@apache.org wrote:

>Did you try upping the reuse number?  Reuse of 1 defeats the whole point of 
>having a database pool.
>
>  
>


Re: Once again, Repetitive loss of connection with JDBC request and MySQL

Posted by ms...@apache.org.
Did you try upping the reuse number?  Reuse of 1 defeats the whole point of 
having a database pool.

-Mike

On 10 Aug 2003 at 17:10, serge van Thiel wrote:

> Well as I said, I first wanted to have a robust scenario to start from and to 
> further elaborate it. 
> So, in order to give you feedback, I made 2 tests : the first with a pool of 1 
> connection and a reuse factor of 1. The second is with a pool of 10 and a 
> reuse of 1. Those values have been defined in a "Database Connection Pool 
> defaults" entry.
> I guess it does not represent something "real" in terms of database access but
> it does not crash anymore, which is the primary condition to enable further 
> investigation.
> Cheers,
> 
> Serge
> On Sunday 10 August 2003 04:54 pm, mstover1@apache.org wrote:
> > That's interesting - I'm curious, what numbers did you choose for number of
> > connections in the pool and number of re-uses allowed?
> >
> > -Mike
> >
> > On 10 Aug 2003 at 16:45, serge van Thiel wrote:
> > > Hi Mike and Jeremy,
> > >
> > > Thank you very much for your replies and sorry for being late in my
> > > reaction. Here are the results of my tests :
> > > Rel1.9 standard keeps crashing for the same reason on my very simple one
> > > thread SELECT.
> > > Rel1.9 with the 2 new jars from Jeremy does not crash at all and is, in
> > > that oversimplified test, even 50 times faster than any previous result I
> > > have been recording.
> > > The logfile (jmeter.log) does not show any abnormal behaviour as far as I
> > > can see. It even includes many more details.
> > > Given the few and non-exhaustive tests I have made, it seems that you
> > > did a great job.
> > >
> > > Thanks again,
> > >
> > > Serge van Thiel
> > > Ph.:+32 2 3752277
> > > Mob.:+32 477 414543
> > > email: serge.vanthiel@skynet.be
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
> > > For additional commands, e-mail: jmeter-user-help@jakarta.apache.org
> 
> 






Re: Once again, Repetitive loss of connection with JDBC request and MySQL

Posted by serge van Thiel <se...@skynet.be>.
Well as I said, I first wanted to have a robust scenario to start from and to 
further elaborate it. 
So, in order to give you feedback, I made 2 tests : the first with a pool of 1 
connection and a reuse factor of 1. The second is with a pool of 10 and a 
reuse of 1. Those values have been defined in a "Database Connection Pool 
defaults" entry.
I guess it does not represent something "real" in terms of database access but 
it does not crash anymore, which is the primary condition to enable further 
investigation.
Cheers,

Serge
On Sunday 10 August 2003 04:54 pm, mstover1@apache.org wrote:
> That's interesting - I'm curious, what numbers did you choose for number of
> connections in the pool and number of re-uses allowed?
>
> -Mike
>
> On 10 Aug 2003 at 16:45, serge van Thiel wrote:
> > Hi Mike and Jeremy,
> >
> > Thank you very much for your replies and sorry for being late in my
> > reaction. Here are the results of my tests :
> > Rel1.9 standard keeps crashing for the same reason on my very simple one
> > thread SELECT.
> > Rel1.9 with the 2 new jars from Jeremy does not crash at all and is, in
> > that oversimplified test, even 50 times faster than any previous result I
> > have been recording.
> > The logfile (jmeter.log) does not show any abnormal behaviour as far as I
> > can see. It even includes many more details.
> > Given the few and non-exhaustive tests I have made, it seems that you
> > did a great job.
> >
> > Thanks again,
> >
> > Serge van Thiel
> > Ph.:+32 2 3752277
> > Mob.:+32 477 414543
> > email: serge.vanthiel@skynet.be
> >

Re: W3C <-> Apache

Posted by Rodent of Unusual Size <Ke...@Golux.Com>.
Henrik Frystyk Nielsen wrote:
> 
> I suggest that instead of throwing a lot of big (and largely meaningless)
> words at each other, we should work on how to proceed. This
> requires, however, that we start using a more constructive tone instead of
> calling each other shit heads.

Sounds good to me.  Remember that's how you should interpret my
remarks below..

> As I said, I would very much like to get more input from the Apache group -
> I do need people who can do real work. I have made the conditions clear so
> here you go - it's up to you!

Maybe it isn't obvious, and maybe it's just my opinion - but I should
think that the group responsible for designing HTTP-NG would do well
to *solicit* input from the group responsible for producing the most
popular Web server on the planet.  Instead, we sort of get into this
discussion by accident, and what I'm seeing could easily be construed
as "here are the rules: give us 50% of your time or don't expect us
to take you seriously."

That doesn't sound like a collaboration; it sounds positively
adversarial and elitist.  I doubt that's the intent at all, but I
wanted you to know how it could be interpreted.

I think it incredibly unlikely (less than 0.05 probability) that anyone
in The Apache Group can dedicate 50% of their time to HTTP-NG.  Is there
any hope of equivalency?  You know, "BSc *or* 5 years experience
required"?  We have the experience (well, as a group); can't that
be taken in lieu of dedicated time in terms of measuring impact?

#ken	P-)}

Re: VirtualHost affects a PHP script

Posted by James Blond <jb...@gmail.com>.
Hm, somewhat odd programming;
$_SERVER["SCRIPT_NAME"], $_SERVER["REQUEST_URI"] and
$_SERVER["PHP_SELF"] always work.

Regards,
Mario

On Fri, Oct 31, 2008 at 3:08 PM, Fabian Cenedese <Ce...@indel.ch> wrote:
> I have since seen that the difference is the PHP variables
> SCRIPT_URL and SCRIPT_URI, which are not set for the VirtualHost.
> Apparently this has to do with mod_rewrite. If I move the existing
> rule from the global scope into the VirtualHost as well, it seems to work.
>
> http://issues.ez.no/8834
>
> Thanks
>
> bye  Fabi

--------------------------------------------------------------------------
                Apache HTTP Server Mailing List "users-de" 
      unsubscribe requests to users-de-unsubscribe@httpd.apache.org
           other requests to users-de-help@httpd.apache.org
--------------------------------------------------------------------------


Re: VirtualHost affects a PHP script

Posted by Fabian Cenedese <Ce...@indel.ch>.
At 14:23 31.10.2008 +0100, James Blond wrote:
>Hello Fabian,
>using the same DocumentRoot twice is also unusual.
>In any case you also need an "administrative" directive
>in your vhost.
>
>e.g.
>
><Directory "/opt/apache/htdocs">
>    Options Indexes FollowSymLinks
>    AllowOverride All
>    Order allow,deny
>    Allow from all
></Directory>
>
>I also recommend enabling the PHP error log in php.ini:
>
>log_errors = On
>error_log = /opt/apache/logs/phperror.log

I have since seen that the difference is the PHP variables
SCRIPT_URL and SCRIPT_URI, which are not set for the VirtualHost.
Apparently this has to do with mod_rewrite. If I move the existing
rule from the global scope into the VirtualHost as well, it seems to work.

http://issues.ez.no/8834

Thanks

bye  Fabi





Re: How do tabs.xml and site.xml interact?

Posted by Jeff Turner <je...@apache.org>.
On Sat, Feb 08, 2003 at 08:31:34AM +0100, Ed Steenhoek wrote:
> On 8 Feb 2003 at 18:03, Jeff Turner wrote:
...
> > Ah okay.  Then you need to create a book.xml file in your site's
> > content/xdocs directory.  This allows you to specify exactly what you
> > want in the root menu.  Attached is a book.xml for the sample webapp
> > ('forrest seed').
> > 
> > --Jeff
> > 
> 
> OK. Will do trials.
> 
> I think this is beginning to become a design issue: I would have 
> expected that this behaviour would also be generated from site.xml 
> making usage of book.xml obsolete.

It is a design issue; specifically we need a flexible way of creating
'views' (in the SQL sense) of site.xml.

We also need a more flexible XML format for representing menus (book.xml
is limited to 2 levels, and menus aren't clickable), and stylesheets that
take advantage of the new format.  So much to do, so little time..
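
For readers following along, a root-menu book.xml of the kind mentioned above looks roughly like this (structure recalled from the Forrest seed webapp; the labels and hrefs are placeholders, and attribute details may differ between Forrest versions):

```xml
<book software="MyProject" title="MyProject Menu"
      copyright="2003 The Apache Software Foundation">
  <menu label="About">
    <menu-item label="Index" href="index.html"/>
    <menu-item label="Changes" href="changes.html"/>
  </menu>
  <menu label="Samples">
    <menu-item label="Static content" href="samples/sample.html"/>
  </menu>
</book>
```

The two-level menu/menu-item nesting is the "limited to 2 levels" constraint mentioned above.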

--Jeff

> But first I have to become more familiar with all the aspects of 
> Forrest to even think if starting a discussion on this.
> 
> Thanks for now.
> 
> Ed
> 
> 

Re: WAS: BUG 11536 - throughput calculation

Posted by Jordi Salvat i Alabart <js...@atg.com>.
Hi Wolfram.

A while ago I submitted an enhancement request to Bugzilla for a Timer 
which waits for a certain amount of time minus whatever the previous 
request has taken.

I think this would solve your issue, since you can say:

- Repeat
   - Get this page
   - Wait for 30 seconds (minus the time it took to get the page)

To generate a traffic of exactly 2 requests per minute.

I'm thinking it would be even greater if you could use this with the 
time a whole controller took to run:

- Repeat
   - Simple controller
     - get this page
     - get that page
     - submit this form
   - Wait for 60 seconds (minus bla bla)

To generate a traffic of exactly 1 submission per minute (provided the 
whole controller completes in less than 1 minute, of course).
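
The pacing computation described above is tiny; a sketch (names are illustrative, not the eventual JMeter timer API):

```java
// Pace samples at a fixed interval: wait for the interval minus
// however long the previous request (or whole controller run) took.
class PacingTimer {
    private final long intervalMillis;

    PacingTimer(long intervalMillis) {
        this.intervalMillis = intervalMillis;
    }

    // Remaining delay needed to hold the rate at one sample per interval;
    // if the sample already took longer than the interval, don't wait.
    long delayFor(long elapsedMillis) {
        return Math.max(0, intervalMillis - elapsedMillis);
    }

    public static void main(String[] args) throws InterruptedException {
        PacingTimer timer = new PacingTimer(30_000); // 2 requests per minute
        long start = System.currentTimeMillis();
        // ... issue the request here ...
        long elapsed = System.currentTimeMillis() - start;
        Thread.sleep(timer.delayFor(elapsed));
    }
}
```

The clamp at zero is also why the target rate only holds while each iteration completes within the interval, as noted above.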

I'll try to find time to add this to the enhancement request... and to 
give a crack at implementing it :-)

Salut,

Jordi.

Wolfram Rittmeyer wrote:
> Mike wrote:
> 
> 
>>But, here's the problem: The server didn't actually serve up 10 requests per second.  It was
>>only asked to serve up some fraction of that by JMeter - for 5 seconds between each request,
>>JMeter "gave the server a break", so to speak.  I think developers would be incorrect to think
>>that the 10 requests/second calculation represents what their server could have done.  You
>>don't know that.  If you take away JMeter's 5 second delay, you might find the numbers get
>>worse - it depends since servers can get bogged down unexpectedly, and performance can
>>go in the toilet without warning as you increase number of hits attempted.
>>
>>Although your calculation seems to indicate the server can handle 10 requests per second,
>>you might find that in actuality, it can never reach that.  In which case, I have to ask - what is
>>the value of that calculation?
>>
>>The current calculation gives a cold, hard number - this is the number of requests that the
>>server actually handled.  It doesn't represent a theoretical max, or even a theoretical max given
>>the limited # of threads.  It's the actual.  To find your server's max throughput, you have to run
>>multiple tests, changing the number of users.  As you increase the number of simultaneous
>>users, the actual throughput will generally increase, until you reach a point at which your
>>server either plateaus or begins to decline.  This represents your server's maximum
>>throughput.  That is my thinking.  Feel free to set me straight :-)
>>
>>-Mike
>>
> 
> 
> Even this is somewhat hypothetical. At least if some pages need more
> processing-time than others. You would know which is more time-intensive
> of course, but how often which page is really called, that's a thing you
> just get to know when your page (in the case of HTTP-requests) is up and
> real users use it. They decide whether they want the page to be
> recomputed by changing some parameters and so on. If you just request
> this time-intensive page two times in your test but it gets called four
> times on average in reality, your calculated throughput will quickly get
> a number with no counterpart in reality. Therefore, to be honest, every
> throughput calculation in advance is somewhat hypothetical, isn't it?
> 
> So, finally, I must admit that changing the GraphVisualizers doesn't
> make much sense ;-) Anyhow I think the discussion was fertile (or
> insightful or whatever one would say in English)...
> 
> Nevertheless, as a result of that I want to suggest something else (I
> will post an enhancement request later on when I have my thoughts a bit
> more straight), but since I'd like to discuss the usefulness, I just
> include it in here:
> 
> 1. Another Visualizer that visualizes the contents of a
> ThreadGroups-test by each Sampler. So you get the avg, mean, and
> deviation for every HttpRequest. Maybe a BarChart or something like that.
> 
> 2. Maybe it could be useful (could it?) to set up a timer or something
> else to ensure that with a given number of users your throughput is met.
> That is you set up the ThreadGroup as usual. But you add something where
> you can input the throughput you want to guarantee. If you run your
> tests you will receive a failure if the throughput cannot be met or a
> value indicating how much sleep-time remains between individual requests
> per user. This one might be helpful in determining if addition to a
> given path. E.g.: I have contributed to a web-app where the user had to
> follow a given path to get a map in the end [which is the most
> time-consuming page of course]. So if I want to add some steps and some
> processing in between, could we probably meet the throughput-goal or
> not. Yet I'm not so sure about that (especially since I guess it would
> be some work to realize it). Hm, after thinking a short while about
> this, I think one would get the result as quickly by the existing
> GraphVisualizer. But maybe my confused thoughts elicit better and more
> useful thoughts by others...
> 
> Greetings,
> 
> Wolfram
> 
> --
> To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
> For additional commands, e-mail: <ma...@jakarta.apache.org>
> 
> 






Re: WAS: BUG 11536 - throughput calculation

Posted by Mike Stover <ms...@apache.org>.
On 14 Aug 2002 at 12:08, Wolfram Rittmeyer wrote:

> 
> Mike wrote:
> 
> > But, here's the problem: The server didn't actually serve up 10 requests per second.  It was
> > only asked to serve up some fraction of that by JMeter - for 5 seconds between each request,
> > JMeter "gave the server a break", so to speak.  I think developers would be incorrect to think
> > that the 10 requests/second calculation represents what their server could have done.  You
> > don't know that.  If you take away JMeter's 5 second delay, you might find the numbers get
> > worse - it depends since servers can get bogged down unexpectedly, and performance can
> > go in the toilet without warning as you increase number of hits attempted.
> > 
> > Although your calculation seems to indicate the server can handle 10 requests per second,
> > you might find that in actuality, it can never reach that.  In which case, I have to ask - what is
> > the value of that calculation?
> > 
> > The current calculation gives a cold, hard number - this is the number of requests that the
> > server actually handled.  It doesn't represent a theoretical max, or even a theoretical max given
> > the limited # of threads.  It's the actual.  To find your server's max throughput, you have to run
> > multiple tests, changing the number of users.  As you increase the number of simultaneous
> > users, the actual throughput will generally increase, until you reach a point at which your
> > server either plateaus or begins to decline.  This represents your server's maximum
> > throughput.  That is my thinking.  Feel free to set me straight :-)
> > 
> > -Mike
> > 
> 
> Even this is somewhat hypothetical. At least if some pages need more
> processing-time than others. You would know which is more time-intensive
> of course, but how often which page is really called, that's a thing you
> just get to know when your page (in the case of HTTP-requests) is up and
> real users use it. They decide whether they want the page to be
> recomputed by changing some parameters and so on. If you just request
> this time-intensive page two times in your test but it gets called four
> times on average in reality, your calculated throughput will quickly get
> a number with no counterpart in reality. Therefore, to be honest, every
> throughput calculation in advance is somewhat hypothetical, isn't it?
> 
> So, finally, I must admit that changing the GraphVisualizers doesn't
> make much sense ;-) Anyhow I think the discussion was fertile (or
> insightful or whatever one would say in English)...

I think there's room in the graph visualizer for a few more numbers at the bottom - like Berin's 
"idealized" throughput calculation.  I don't know about drawing more lines in the graph itself 
though.

Entirely new visualizers are always a possibility too.  A useful visualizer might be one that has 
tabs across the top to let you quickly switch views between multiple visualizers.

> 
> Nevertheless, as a result of that I want to suggest something else (I
> will post an enhancement request later on when I have my thoughts a bit
> more straight), but since I'd like to discuss the usefulness, I just
> include it in here:
> 
> 1. Another Visualizer that visualizes the contents of a
> ThreadGroups-test by each Sampler. So you get the avg, mean, and
> deviation for every HttpRequest. Maybe a BarChart or something like that.

Check out the Aggregate Report.  It gives you this data per different request.  I use it to 
pinpoint which requests are giving all the trouble - time wise and error-wise.  It could use 
enhancement, but it's quite useful as it is.

> 
> 2. Maybe it could be useful (could it?) to set up a timer or something
> else to ensure that with a given number of users your throughput is
> met. That is, you set up the ThreadGroup as usual, but you add
> something where you can input the throughput you want to guarantee. If
> you run your tests you will receive a failure if the throughput cannot
> be met, or a value indicating how much sleep-time remains between
> individual requests per user. This might be helpful in deciding
> whether additions to a given path are feasible. E.g.: I have
> contributed to a web-app where the user had to follow a given path to
> get a map in the end [which is the most time-consuming page of
> course]. So if I want to add some steps and some processing in
> between, could we still meet the throughput goal or not? Yet I'm not
> so sure about that (especially since I guess it would be some work to
> realize it). Hm, after thinking a short while about this, I think one
> would get the result just as quickly with the existing
> GraphVisualizer. But maybe my confused thoughts will elicit better and
> more useful thoughts from others...

Sounds similar to thoughts I'd had about making JMeter adjust test settings to discover various 
things about a site.  For instance, gradually increasing the number of threads to find the 
saturation point of the server.  Similarly, decreasing the delays could be part of that, and 
Jordi's throughput timer idea is probably most useful in that respect.  

-Mike 

> 
> Greetings,
> 
> Wolfram
> 
> --
> To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
> For additional commands, e-mail: <ma...@jakarta.apache.org>
> 



--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688



WAS: BUG 11536 - throughput calculation

Posted by Wolfram Rittmeyer <wo...@web.de>.
Mike wrote:

> But, here's the problem: The server didn't actually serve up 10 requests per second.  It was
> only asked to serve up some fraction of that by JMeter - for 5 seconds between each request,
> JMeter "gave the server a break", so to speak.  I think developers would be incorrect to think
> that the 10 requests/second calculation represents what their server could have done.  You
> don't know that.  If you take away JMeter's 5 second delay, you might find the numbers get
> worse - it depends, since servers can get bogged down unexpectedly, and performance can
> go in the toilet without warning as you increase the number of hits attempted.
> 
> Although your calculation seems to indicate the server can handle 10 requests per second,
> you might find that in actuality, it can never reach that.  In which case, I have to ask - what is
> the value of that calculation?
> 
> The current calculation gives a cold, hard number - this is the number of requests that the
> server actually handled.  It doesn't represent a theoretical max, or even a theoretical max given
> the limited # of threads.  It's the actual.  To find your server's max throughput, you have to run
> multiple tests, changing the number of users.  As you increase the number of simultaneous
> users, the actual throughput will generally increase, until you reach a point at which your
> server either plateaus or begins to decline.  This represents your server's maximum
> throughput.  That is my thinking.  Feel free to set me straight :-)
> 
> -Mike
> 

Even this is somewhat hypothetical, at least if some pages need more
processing time than others. You would know which page is more
time-intensive, of course, but how often each page is really called is
something you only learn once your page (in the case of HTTP requests)
is live and real users use it. They decide whether they want the page
to be recomputed by changing some parameters and so on. If you request
this time-intensive page only twice in your test but it gets called
four times on average in reality, your calculated throughput quickly
becomes a number with no counterpart in reality. Therefore, to be
honest, every throughput calculation made in advance is somewhat
hypothetical, isn't it?
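
To make the page-mix point concrete, here is a small sketch (all numbers 
invented) showing how the same pages yield a different aggregate throughput 
once the expensive page is called more often than the test assumed:

```python
def mixed_throughput(pages):
    """Aggregate throughput (requests/sec) for a mix of pages.

    pages: list of (calls, avg_seconds) pairs - how often each page is
    requested and how long one request takes on average, assuming
    requests are served back to back.
    """
    total_calls = sum(calls for calls, _ in pages)
    total_time = sum(calls * secs for calls, secs in pages)
    return total_calls / total_time

test_mix = [(4, 0.1), (2, 2.0)]   # cheap page x4, expensive map page x2
prod_mix = [(4, 0.1), (4, 2.0)]   # in reality the map page is hit x4
print(round(mixed_throughput(test_mix), 2))  # 1.36
print(round(mixed_throughput(prod_mix), 2))  # 0.95
```

Same server, same pages - only the call mix changed, and the measured 
number from the test has no counterpart in production.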

So, finally, I must admit that changing the GraphVisualizers doesn't
make much sense ;-) Anyhow, I think the discussion was fruitful (or
insightful or whatever one would say in English)...

Nevertheless, as a result of that I want to suggest something else (I
will post an enhancement request later on when I have my thoughts a bit
more straight), but since I'd like to discuss the usefulness, I just
include it in here:

1. Another Visualizer that visualizes the contents of a
ThreadGroup's test by each Sampler. So you get the average, median, and
deviation for every HttpRequest. Maybe a BarChart or something like that.
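
The per-Sampler statistics in suggestion 1 boil down to grouping samples by 
label and summarizing each group (roughly what the Aggregate Report 
computes); a minimal sketch with invented sample data:

```python
from collections import defaultdict
from statistics import mean, median, pstdev

def per_request_stats(samples):
    """samples: (label, elapsed_ms) pairs, one per recorded sample.

    Returns {label: {"avg": ..., "median": ..., "dev": ...}}.
    """
    by_label = defaultdict(list)
    for label, elapsed in samples:
        by_label[label].append(elapsed)
    return {label: {"avg": mean(times),
                    "median": median(times),
                    "dev": pstdev(times)}
            for label, times in by_label.items()}

data = [("login", 120), ("login", 80), ("map", 900), ("map", 1100)]
stats = per_request_stats(data)
print(stats["map"])
```

Each dict entry would map naturally onto one bar (or one table row) in the 
proposed visualizer.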

2. Maybe it could be useful (could it?) to set up a timer or something
else to ensure that with a given number of users your throughput is
met. That is, you set up the ThreadGroup as usual, but you add
something where you can input the throughput you want to guarantee. If
you run your tests you will receive a failure if the throughput cannot
be met, or a value indicating how much sleep-time remains between
individual requests per user. This might be helpful in deciding
whether additions to a given path are feasible. E.g.: I have
contributed to a web-app where the user had to follow a given path to
get a map in the end [which is the most time-consuming page of
course]. So if I want to add some steps and some processing in
between, could we still meet the throughput goal or not? Yet I'm not
so sure about that (especially since I guess it would be some work to
realize it). Hm, after thinking a short while about this, I think one
would get the result just as quickly with the existing
GraphVisualizer. But maybe my confused thoughts will elicit better and
more useful thoughts from others...
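
The arithmetic behind suggestion 2 is straightforward: with U users and a 
target of T requests/sec overall, each user must fire a request every U/T 
seconds, so the sleep time left over per request is U/T minus the average 
response time, and a negative value means the goal is unreachable. A sketch 
(not an existing JMeter element):

```python
def remaining_sleep(users, target_tps, avg_response_s):
    """Sleep time (seconds) each user can afford between requests while
    still meeting target_tps overall; a negative result means the
    throughput goal cannot be met with this many users."""
    period = users / target_tps  # seconds between requests, per user
    return period - avg_response_s

print(remaining_sleep(users=10, target_tps=5.0, avg_response_s=0.5))  # 1.5
print(remaining_sleep(users=10, target_tps=5.0, avg_response_s=2.5))  # -0.5
```

Adding steps to a path just raises avg_response_s, so the same formula 
answers the "can we still meet the goal?" question.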

Greetings,

Wolfram



RE: Externals using absolute path on Windows

Posted by David Hickman <da...@audleytravel.com>.
David Hickman <da...@audleytravel.com> writes:

> Is anyone aware of any work arounds for issue number 4073?  I guess 
> using relative paths is likely to work but that would require changes 
> to a number of our projects that we would rather not tackle at this 
> time.  Are there any special characters or special methods of 
> formatting the path that I can use to get the externals to work?  If 
> not does anyone have any idea about the timescales for the version in 
> which this issue is likely to be resolved?

Allowing absolute paths in older versions on Windows was inadvertent.
Adding support is not currently planned, since allowing the server to place files at known locations outside the working copy has security implications.

--
Philip


>>>>>>>

Hi Philip,

Thanks for your reply.  I received a response on the Tortoise SVN mailing list that this bug had already been fixed in trunk and would be back-ported to the 1.7 branch.  Is it definite that support for absolute paths will not exist in the future?


Best Regards,
David

Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Geertjan Wielenga <ge...@apache.org>.
The README explains it; if not, file an issue there or provide a pull
request:

https://github.com/apache/netbeans/blob/master/README.md

Gj

On Sun, Oct 6, 2019 at 3:47 PM Eric Bresie <eb...@gmail.com> wrote:

> So is the general principle that NB production is built with Long Term
> Release (8, 11, etc.) JDK versions?
>
> Just for clarification, what is needed to build with a newer version of
> java? Does it involve:
> (1) Setting netbeans config file?
> (2) Setting JAVA_HOME variable?
> (3) Setting PATH variable?
> (4) Setting alternative-version?
> (5) Setting target/source at javac parameter?
> (6) Any additional build settings?
>
> This all assumes there is a need for a newer version due to new features (e.g.
> modules, switch expressions, etc.)
>
> Eric
>
> On Sat, Oct 5, 2019 at 5:34 AM Geertjan Wielenga <ge...@apache.org>
> wrote:
>
> > >
> > > 2. I didn't quite understand this:
> > >
> > > On Wed, 02 Oct 2019 22:50:55 +0200 Matthias Bläsing wrote:
> > >
> > > > While netbeans can be built with JDK 9, that is not the production
> > > > configuration, so you could introduce dependencies on newer
> > > > implementations without realising it, before the problem is caught by
> > > > the CI pipeline.
> > >
> > > (a) What is "CI pipeline"?
> > > (b) What are these dependencies and what would be the negative result
> in
> > > introducing them?
> > >
> >
> >
> > Continuous integration pipeline:
> > https://builds.apache.org/job/netbeans-linux/
> >
> > If you're going to write code using new language features introduced in
> JDK
> > 9 or later, and then commit that to Apache NetBeans GitHub, that will be
> > caught by the CI pipeline since the production configuration is JDK 8.
> >
> > Hope it helps,
> >
> > Gj
> >
> >
> >
> >
> > On Sat, Oct 5, 2019 at 12:29 PM mlist <ml...@riseup.net> wrote:
> >
> > > Thanks Geertjan.
> > >
> > > I hope you (or someone else) can answer the other questions too.
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: dev-unsubscribe@netbeans.apache.org
> > > For additional commands, e-mail: dev-help@netbeans.apache.org
> > >
> > > For further information about the NetBeans mailing lists, visit:
> > > https://cwiki.apache.org/confluence/display/NETBEANS/Mailing+lists
> > >
> > >
> > >
> > >
> >
> --
> Eric Bresie
> ebresie@gmail.com
> http://www.linkedin.com/in/ebresie
>

Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Eric Bresie <eb...@gmail.com>.
So is the general principle that NB production is built with Long Term
Release (8, 11, etc.) JDK versions?

Just for clarification, what is needed to build with a newer version of
java? Does it involve:
(1) Setting netbeans config file?
(2) Setting JAVA_HOME variable?
(3) Setting PATH variable?
(4) Setting alternative-version?
(5) Setting target/source at javac parameter?
(6) Any additional build settings?

This all assumes there is a need for a newer version due to new features (e.g.
modules, switch expressions, etc.)

Eric

On Sat, Oct 5, 2019 at 5:34 AM Geertjan Wielenga <ge...@apache.org>
wrote:

> >
> > 2. I didn't quite understand this:
> >
> > On Wed, 02 Oct 2019 22:50:55 +0200 Matthias Bläsing wrote:
> >
> > > While netbeans can be built with JDK 9, that is not the production
> > > configuration, so you could introduce dependencies on newer
> > > implementations without realising it, before the problem is caught by
> > > the CI pipeline.
> >
> > (a) What is "CI pipeline"?
> > (b) What are these dependencies and what would be the negative result in
> > introducing them?
> >
>
>
> Continuous integration pipeline:
> https://builds.apache.org/job/netbeans-linux/
>
> If you're going to write code using new language features introduced in JDK
> 9 or later, and then commit that to Apache NetBeans GitHub, that will be
> caught by the CI pipeline since the production configuration is JDK 8.
>
> Hope it helps,
>
> Gj
>
>
>
>
> On Sat, Oct 5, 2019 at 12:29 PM mlist <ml...@riseup.net> wrote:
>
> > Thanks Geertjan.
> >
> > I hope you (or someone else) can answer the other questions too.
> >
> >
> >
> >
> >
>
-- 
Eric Bresie
ebresie@gmail.com
http://www.linkedin.com/in/ebresie

Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Geertjan Wielenga <ge...@apache.org>.
In 11.2 this is not needed; betas are available - use them.

Gj

On Sat, 5 Oct 2019 at 18:44, mlist <ml...@riseup.net> wrote:

> Something weird happened today.
>
> I simply started my freshly built netbeans 11.1 and it showed a dialog
> box with the text:
>
> "Warning - could not install some modules: Nashorn Integration - No
> module providing the capability com.oracle.js.parser.implementation
> could be found. 19 further modules could not be installed due to the
> above problems."
>
> I had the options to disable these modules and continue or to exit.
>
> How come this is happening all of a sudden?
> Is there a fix? (I don't know even what these modules are)
>
>
>
>
>

Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
On Sat, 5 Oct 2019 18:09:09 +0100 Neil C Smith wrote:

> At some point, I presume you've used another build of NetBeans 11.1
> (or upgraded from earlier version) somewhere in user space?

Yes. Before building 11.1 I was using 8.2. After first run of 11.1 it
imported everything from 8.2.

I have no idea what these (19?) modules are about and which ones I need
or not. Can you please share a link with info?

> I'm glad we will no longer have this issue in 11.2!  Try building the
> beta of that instead, and report back problems. ;-)

Will do. Thanks! I hope it is stable :)


OT:

BTW, when some of you "reply to all" instead of sending to the list only,
I receive the messages twice - once from the list and once sent directly
to me. ;)





Re: Building NetBeans 11.2-beta1

Posted by mlist <ml...@riseup.net>.
On Sun, 6 Oct 2019 13:03:16 +0200 Geertjan Wielenga wrote:

> If it builds and runs, that’s great. Don’t worry about the messages.

Thanks.
What about the version number? Is it normal?





Re: Building NetBeans 11.2-beta1

Posted by Geertjan Wielenga <ge...@apache.org>.
If it builds and runs, that’s great. Don’t worry about the messages.

Gj

On Sun, 6 Oct 2019 at 12:52, mlist <ml...@riseup.net> wrote:

> On Sat, 5 Oct 2019 18:09:09 +0100 Neil C Smith wrote:
>
> > I'm glad we will no longer have this issue in 11.2!  Try building the
> > beta of that instead, and report back problems. ;-)
>
> OK, I have built 11.2 (from GitHub's tag '11.2-beta1').
>
> Help->About shows:
>
> Apache NetBeans IDE DEV (Build 20191006-unknown-revn)
>
> Also running it from console shows quite a few WARNING and INFO messages.
>
>
>
>
>

Re: Building NetBeans 11.2-beta1

Posted by mlist <ml...@riseup.net>.
Thank you!






Re: Building NetBeans 11.2

Posted by mlist <ml...@riseup.net>.
I see.
Thanks again.





Re: Building NetBeans 11.2

Posted by Neil C Smith <ne...@apache.org>.
On Thu, 21 Nov 2019, 09:56 mlist, <ml...@riseup.net> wrote:

>
> Something strange though. When I started the build of 11.2 it wanted to
> import settings from 11.1, not from 11.2-beta (which itself had already
> imported the settings of 11.1 earlier).
>
> Why is that?
>

Not strange, just how it works. Only importing from releases is currently
supported there, although Tools / Options / Import should allow for copying
some things over from the beta.

Someone mentioned working on changing the import code recently? We need
something better here, as it's also the one thing that still needs
manual updating between releases.

Best wishes,

Neil

>

Re: Building NetBeans 11.2

Posted by mlist <ml...@riseup.net>.
On Wed, 20 Nov 2019 22:12:49 +0000 Neil C Smith wrote:

> As I already said, if you're building from git change the metabuild hash on
> the ant build line to match 11.2

Oh. Finally I got that.
Thanks!

Something strange though. When I started the build of 11.2 it wanted to
import settings from 11.1, not from 11.2-beta (which itself had already
imported the settings of 11.1 earlier).

Why is that?





Re: Building NetBeans 11.2

Posted by Neil C Smith <ne...@apache.org>.
On Wed, 20 Nov 2019, 22:04 mlist, <ml...@riseup.net> wrote:

>
> I checked out with tag '11.2' but after building Help->About still
> shows 11.2-beta.
>

As I already said, if you're building from git change the metabuild hash on
the ant build line to match 11.2

Best wishes,

Neil

>

Re: Building NetBeans 11.2

Posted by mlist <ml...@riseup.net>.
On Wed, 20 Nov 2019 09:16:03 +0000 Neil C Smith wrote:

> You need to change the hash on the build line to match the git hash of the
> release. Or check the gitinfo.properties file in the source zip download.
> In fact, building from source zip recommended. Or if not, the git tag. The
> release branch is ahead of the release already.

I checked out with tag '11.2' but after building Help->About still 
shows 11.2-beta.

What tag should I use for '11.2' (final, stable release)?





Re: Building NetBeans 11.2

Posted by Neil C Smith <ne...@apache.org>.
On Wed, 20 Nov 2019, 08:44 mlist, <ml...@riseup.net> wrote:

> Anybody? Please.
>
> On Sat, 16 Nov 2019 22:43:05 +0200 mlist wrote:
>
> > I have just rebuilt using 'release112' (just like before) hoping that
> > this will update me to the latest version 11.2 (as per GitHub and as
> > per your explanation). However Help->About of the build still shows
> > 11.2-beta2.
>

You need to change the hash on the build line to match the git hash of the
release. Or check the gitinfo.properties file in the source zip download.
In fact, building from source zip recommended. Or if not, the git tag. The
release branch is ahead of the release already.

Best wishes,

Neil

>

Re: Building NetBeans 11.2

Posted by mlist <ml...@riseup.net>.
Anybody? Please.

On Sat, 16 Nov 2019 22:43:05 +0200 mlist wrote:

> I have just rebuilt using 'release112' (just like before) hoping that
> this will update me to the latest version 11.2 (as per GitHub and as
> per your explanation). However Help->About of the build still shows
> 11.2-beta2.
> 
> Is it that the branch is not updated properly or am I doing something
> wrong?





Re: Building NetBeans 11.2-beta1

Posted by mlist <ml...@riseup.net>.
Hi again.

On Mon, 7 Oct 2019 19:08:27 +0100 Neil C Smith wrote:

> 11.2-beta2 is a tag, release112 is the branch.  The tag is a fixed
> point, the branch will be updated for beta3 and the release itself.

I have just rebuilt using 'release112' (just like before) hoping that
this will update me to the latest version 11.2 (as per GitHub and as
per your explanation). However Help->About of the build still shows
11.2-beta2.

Is it that the branch is not updated properly or am I doing something
wrong?





Re: Building NetBeans 11.2-beta1

Posted by Neil C Smith <ne...@apache.org>.
On Mon, 7 Oct 2019 at 19:02, mlist <ml...@riseup.net> wrote:
> What is the difference between checking out 11.2-beta2 and release112?

11.2-beta2 is a tag, release112 is the branch.  The tag is a fixed
point, the branch will be updated for beta3 and the release itself.

Neil





Re: Building NetBeans 11.2-beta1

Posted by mlist <ml...@riseup.net>.
On Mon, 7 Oct 2019 12:34:51 +0100 Neil C Smith wrote:

> By release112 tip I mean just git checkout release112.

Thanks Neil.

What is the difference between checking out 11.2-beta2 and release112?

I notice that for both git show outputs:

commit 86e1b2eb194e8e9628a66e6ee1a128134c70671a (HEAD -> release112, tag: 11.2-beta2, origin/release112)

IOW: which naming to use long term?





Re: Building NetBeans 11.2-beta1

Posted by Neil C Smith <ne...@apache.org>.
On Sun, 6 Oct 2019 at 13:59, mlist <ml...@riseup.net> wrote:
> On Sun, 6 Oct 2019 12:15:25 +0100 Neil C Smith wrote:
>
> > If you build from checkout you also have to manually specify the
> > metabuild branch and hash information.
>
> > Also, use release112 tip.
>
> How do I do these please?
>
> I am attaching the short bash script I use to build.

If building from a git checkout you currently need to pass in the
branch and hash info to the build.  There is a change in master (and
therefore beta3) that will try to pick up that info from the .git
folder, but it still won't work with a checked-out tag (because it's
detached).  The easiest way to build a (beta) release is from the
source zip download in the announcement.

To pass in the information to the build, e.g. from git checkout
11.2-beta2, use git show to see the git hash.  It should match the tag
for beta2 in the file at
https://github.com/apache/netbeans-jenkins-lib/blob/master/meta/netbeansrelease.json#L134
This file is where version info is stored.

To build 11.2-beta2 from git checkout you should then be able to run

ant -Dmetabuild.branch=release112
-Dmetabuild.hash=86e1b2eb194e8e9628a66e6ee1a128134c70671a

For beta1, check out that tag, and then pass in the different hash.

In the source zip for betas (and coming release), the above
information is stored in an extra file gitinfo.properties inside
nbbuild.

By release112 tip I mean just git checkout release112.

Hope that helps!

Best wishes,

Neil





Re: Building NetBeans 11.2-beta1

Posted by mlist <ml...@riseup.net>.
On Sun, 6 Oct 2019 12:15:25 +0100 Neil C Smith wrote:

> If you build from checkout you also have to manually specify the
> metabuild branch and hash information.

> Also, use release112 tip.

How do I do these please?

I am attaching the short bash script I use to build.


Re: Building NetBeans 11.2-beta1

Posted by Neil C Smith <ne...@apache.org>.
On Sun, 6 Oct 2019, 11:52 mlist, <ml...@riseup.net> wrote:

> On Sat, 5 Oct 2019 18:09:09 +0100 Neil C Smith wrote:
>
> > I'm glad we will no longer have this issue in 11.2!  Try building the
> > beta of that instead, and report back problems. ;-)
>
> OK, I have built 11.2 (from GitHub's tag '11.2-beta1').
>
> Help->About shows:
>
> Apache NetBeans IDE DEV (Build 20191006-unknown-revn)
>

This is expected. Build from the source zip download. If you build from
checkout you also have to manually specify the metabuild branch and hash
information. There is some code in master now that will make a best
attempt to calculate this from a git checkout, but it may only work on
branch tips at present.

Also, use release112 tip. It's currently beta2. As someone pointed out
elsewhere, I haven't got around to pushing the tag yet.

Best wishes,

Neil

>

Re: Building NetBeans 11.2-beta1

Posted by mlist <ml...@riseup.net>.
On Sat, 5 Oct 2019 18:09:09 +0100 Neil C Smith wrote:

> I'm glad we will no longer have this issue in 11.2!  Try building the
> beta of that instead, and report back problems. ;-)

OK, I have built 11.2 (from GitHub's tag '11.2-beta1').

Help->About shows:

Apache NetBeans IDE DEV (Build 20191006-unknown-revn)

Also running it from console shows quite a few WARNING and INFO messages.





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Neil C Smith <ne...@apache.org>.
On Sat, 5 Oct 2019 at 17:44, mlist <ml...@riseup.net> wrote:
> Something weird happened today.
...
> How come this is happening all of a sudden?
> Is there a fix? (I don't know even what these modules are)

In addition to what Geertjan said, this is actually expected
behaviour, if unfortunate.  The Oracle JS library was previously
distributed under a license we couldn't use at Apache.  At some point,
I presume you've used another build of NetBeans 11.1 (or upgraded from
earlier version) somewhere in user space?  You would have agreed to
install this library as part of enabling a feature of the IDE.

However, there was an issue with how this module was distributed, such
that it was installed in the IDE directory if writable rather than the
user directory.  This causes running another build with the same user
directory to show the message you just saw, because that dependency is
not there.  I saw it multiple times while testing for 11.1 release -
re-enabling disabled things from Plugins should fix it.

I'm glad we will no longer have this issue in 11.2!  Try building the
beta of that instead, and report back problems. ;-)

Best wishes,

Neil





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
Something weird happened today.

I simply started my freshly built netbeans 11.1 and it showed a dialog
box with the text:

"Warning - could not install some modules: Nashorn Integration - No
module providing the capability com.oracle.js.parser.implementation
could be found. 19 further modules could not be installed due to the
above problems."

I had the options to disable these modules and continue or to exit.

How come this is happening all of a sudden?
Is there a fix? (I don't know even what these modules are)





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
On Sat, 5 Oct 2019 14:43:40 +0200 Geertjan Wielenga wrote:

> I don't understand them, what do they mean?

Question #3 is about the warnings I see when running the final build from the command line - is there anything to do about those warnings?

Question #4 - OK, you answered that one. Thanks.





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Geertjan Wielenga <ge...@apache.org>.
I don't understand them; what do they mean? I run Ant on the command line
when building NetBeans, i.e., I simply run 'ant' in the root directory of
NetBeans. It's completely irrelevant what you set Ant to be in the Options
window -- that is only relevant when you run Ant projects inside/from
NetBeans itself.

Gj

On Sat, Oct 5, 2019 at 2:18 PM mlist <ml...@riseup.net> wrote:

> Thanks.
> What about questions 3 and 4 please?
>
>
>
>
>

Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
Thanks.
What about questions 3 and 4 please?





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Geertjan Wielenga <ge...@apache.org>.
>
> 2. I didn't quite understand this:
>
> On Wed, 02 Oct 2019 22:50:55 +0200 Matthias Bläsing wrote:
>
> > While netbeans can be built with JDK 9, that is not the production
> > configuration, so you could introduce dependencies on newer
> > implementations without realising it, before the problem is caught by
> > the CI pipeline.
>
> (a) What is "CI pipeline"?
> (b) What are these dependencies and what would be the negative result in
> introducing them?
>


Continuous integration pipeline:
https://builds.apache.org/job/netbeans-linux/

If you're going to write code using new language features introduced in JDK
9 or later, and then commit that to Apache NetBeans GitHub, that will be
caught by the CI pipeline since the production configuration is JDK 8.

Hope it helps,

Gj




On Sat, Oct 5, 2019 at 12:29 PM mlist <ml...@riseup.net> wrote:

> Thanks Geertjan.
>
> I hope you (or someone else) can answer the other questions too.
>
>
>
>
>

Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
Thanks Geertjan.

I hope you (or someone else) can answer the other questions too.





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Geertjan Wielenga <ge...@apache.org>.
nbbuild/netbeans is the NetBeans installation that is created when you
build Apache NetBeans. You can run it by running ‘ant tryme’ in the root of
the clone or by running the executable in nbbuild/netbeans/bin.

Hope that helps,

Gj

On Fri, 4 Oct 2019 at 22:22, mlist <ml...@riseup.net> wrote:

> John,
>
> Thanks for this info!
>
> Your process is the same I use but it seems the key which helped was:
>
> > export ANT_HOME=$HOME/opt/apache-ant-1.10.7
>
> This made the proper version of my Ant (built from source) show up,
> and the build of NetBeans succeeded with JDK 8!
>
> Questions:
>
> 1. I notice that my old version of NetBeans has the structure which I
> see in 'nbbuild/netbeans'. Does this mean I need only this subdirectory
> (and not the whole 'nbbuild' dir)?
>
> 2. I didn't quite understand this:
>
> On Wed, 02 Oct 2019 22:50:55 +0200 Matthias Bläsing wrote:
>
> > While netbeans can be built with JDK 9, that is not the production
> > configuration, so you could introduce dependencies on newer
> > implementations without realising it, before the problem is caught by
> > the CI pipeline.
>
> (a) What is "CI pipeline"?
> (b) What are these dependencies and what would be the negative result in
> introducing them?
>
> 3. Assuming the answer to 1. is 'yes' I ran the netbeans binary and it gave
> me some warnings:
>
> $ /opt/netbeans/bin/netbeans
> WARNING: An illegal reflective access operation has occurred
> WARNING: Illegal reflective access by org.netbeans.swing.plaf.gtk.ThemeValue (jar:file:/opt/netbeans/platform/modules/org-netbeans-swing-plaf.jar!/) to method javax.swing.plaf.synth.SynthStyle.getColorForState(javax.swing.plaf.synth.SynthContext,javax.swing.plaf.synth.ColorType)
> WARNING: Please consider reporting this to the maintainers of org.netbeans.swing.plaf.gtk.ThemeValue
> WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
> WARNING: All illegal access operations will be denied in a future release
>
> Why is that, and do I need to do something about it?
>
> 4. In NB's options I see that Ant Home is set to
> '/opt/netbeans/extide/ant' (which is 1.10.4). Should I set it to
> /opt/ant (which is my build of 1.10.7 and the one used for building NB
> itself)?
>

Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
John,

Thanks for this info!

Your process is the same I use but it seems the key which helped was:

> export ANT_HOME=$HOME/opt/apache-ant-1.10.7

This made the proper version of my Ant (built from source) show up,
and the build of NetBeans succeeded with JDK 8!

Questions:

1. I notice that my old version of NetBeans has the structure which I
see in 'nbbuild/netbeans'. Does this mean I need only this subdirectory
(and not the whole 'nbbuild' dir)?

2. I didn't quite understand this:

On Wed, 02 Oct 2019 22:50:55 +0200 Matthias Bläsing wrote:

> While netbeans can be built with JDK 9, that is not the production
> configuration, so you could introduce dependencies on newer
> implementations without realising it, before the problem is caught by
> the CI pipeline.

(a) What is "CI pipeline"?
(b) What are these dependencies and what would be the negative result in
introducing them?

3. Assuming the answer to 1. is 'yes' I ran the netbeans binary and it gave
me some warnings:

$ /opt/netbeans/bin/netbeans
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.netbeans.swing.plaf.gtk.ThemeValue (jar:file:/opt/netbeans/platform/modules/org-netbeans-swing-plaf.jar!/) to method javax.swing.plaf.synth.SynthStyle.getColorForState(javax.swing.plaf.synth.SynthContext,javax.swing.plaf.synth.ColorType)
WARNING: Please consider reporting this to the maintainers of org.netbeans.swing.plaf.gtk.ThemeValue
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release

Why is that, and do I need to do something about it?

4. In NB's options I see that Ant Home is set to
'/opt/netbeans/extide/ant' (which is 1.10.4). Should I set it to
/opt/ant (which is my build of 1.10.7 and the one used for building NB
itself)?





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by John Neffenger <jo...@status6.com>.
On 10/4/19 12:27 PM, Neil C Smith wrote:
> After making sure both java -version and javac -version give 8,
> cd into the netbeans directory and run `ant` as per
> http://netbeans.apache.org/download/dev/index.html
Thank you, Neil. That prompted me to figure out how I got off to such a 
bad start. It was JAVA_HOME.

I think it went like this ...

I made sure, as you suggest, to have the correct JDK and Ant versions on 
my PATH:

   $ which java; java -version
   /usr/bin/java
   openjdk version "1.8.0_222"
   OpenJDK Runtime Environment
     (build 1.8.0_222-8u222-b10-1ubuntu1~16.04.1-b10)
   OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)

   $ which javac; javac -version
   /usr/bin/javac
   javac 1.8.0_222

   $ which ant; ant -version
   /home/john/opt/apache-ant-1.10.7/bin/ant
   Apache Ant(TM) version 1.10.7 compiled on September 1 2019

But when I ran the build, I got compilation errors:

   $ ant
   ...
   [javac] /home/john/tmp/netbeans/nbbuild/build/langtools/src/jdk.compiler/share/classes/com/sun/tools/javac/platform/JDKPlatformProvider.java:120: error: reference to newFileSystem is ambiguous
   [javac] ctSym2FileSystem.put(file, fs = FileSystems.newFileSystem(file, null));
   [javac]                                            ^
   [javac] both method newFileSystem(Path,ClassLoader) in FileSystems and method newFileSystem(Path,Map<String,?>) in FileSystems match
   [javac] 4 errors
   [javac] 9 warnings

   BUILD FAILED
   ...
   Total time: 16 seconds

The compilation errors were due to Ant using the JDK commands defined by 
the JAVA_HOME environment variable instead of the ones on my PATH:

   $ ant -diagnostics | grep java.version
   ant.java.version: 13
   java.version : 13
   java.version.date : 2019-09-17

And JAVA_HOME was defined because I'm building NetBeans on my 
development workstation instead of on a separate build machine as I do 
for JavaFX and the JDK.

I think "mlist" had a similar problem with ANT_HOME, which will cause 
the "ant" command on your PATH to transform itself into whatever version 
is defined by the environment variable! Setting all these environment 
variables explicitly in a simple Bash script solved the problems for 
both of us.
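
The lookup order John describes can be sketched as a toy shell function. This mimics, but is not, the real launcher logic, and the JDK path below is hypothetical:

```shell
# resolve_java: mimic a launcher that prefers JAVA_HOME over the PATH,
# which is why the `java` your shell reports can differ from the one used.
resolve_java() {
  if [ -n "$JAVA_HOME" ]; then
    echo "$JAVA_HOME/bin/java"      # JAVA_HOME wins when set
  else
    command -v java || echo "java"  # otherwise fall back to the PATH
  fi
}

JAVA_HOME=/opt/jdk-13 resolve_java  # prints /opt/jdk-13/bin/java
```

The same preference applies to ANT_HOME and the `ant` wrapper, which is why setting both variables explicitly in one script fixed it for both of us.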

In retrospect, it would be nice to have a fourth item on the page you 
linked:

   Apache NetBeans source and daily builds
   https://netbeans.apache.org/download/dev/index.html

   Building from source

   You can of course build Apache NetBeans from source. To do so:

   1. Clone the https://github.com/apache/netbeans GitHub repository.
   2. Install Oracle’s Java or Open JDK (v8, v11).
   3. Install Apache Ant 1.10 or greater (https://ant.apache.org/).
   4. Unset JAVA_HOME and ANT_HOME, or set them both appropriately.
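
Step 4 might look like this in a build script. The install locations below are the ones used earlier in this thread, not defaults; adjust them for your system:

```shell
# Option A: clear both variables so the tools found on PATH win.
unset JAVA_HOME ANT_HOME

# Option B: set both explicitly and put them first on PATH, so the
# wrapper scripts, JAVA_HOME, and ANT_HOME all agree.
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export ANT_HOME="$HOME/opt/apache-ant-1.10.7"
export PATH="$ANT_HOME/bin:$JAVA_HOME/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# Sanity check: the first PATH entry should now be Ant's bin directory.
echo "$PATH" | cut -d: -f1
```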

Thanks again,
John






Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Neil C Smith <ne...@apache.org>.
On Fri, 4 Oct 2019 at 19:38, John Neffenger <jo...@status6.com> wrote:
> I also had trouble building NetBeans, even though I've been building the
> JDK and JavaFX for years. I am very new to the NetBeans source code,
> though. Here's what I did to get it working. I'm on Ubuntu 16.04.6 LTS.

People do like to complicate things! ;-)

Generally, building with distribution OpenJDK 8 and distribution Ant
works on 16.04 and 18.04 just fine for me.  After making sure both
java -version and javac -version give 8, cd into the netbeans
directory and run `ant` as per
http://netbeans.apache.org/download/dev/index.html

Don't worry about the Ant version on 16.04.  We don't even use 1.10 on
the build servers at the moment.  There were a few 1.9 releases that
were broken, though.

Best wishes,

Neil





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by John Neffenger <jo...@status6.com>.
On 10/4/19 9:42 AM, mlist wrote:
> Even that results in the same failure.

I also had trouble building NetBeans, even though I've been building the 
JDK and JavaFX for years. I am very new to the NetBeans source code, 
though. Here's what I did to get it working. I'm on Ubuntu 16.04.6 LTS.

STEPS

(1) If you're getting persistent errors, start over with a new clone of 
the repository. For example, I switched back and forth between JDK 8 and 
JDK 11 for the build, but it seems that some remaining files -- even 
after a "clean" -- can leave the build (or me) hopelessly confused.

(2) Download the Apache Ant 1.10.7 binary archive and its signatures:

   apache-ant-1.10.7-bin.tar.gz
   apache-ant-1.10.7-bin.tar.gz.asc
   apache-ant-1.10.7-bin.tar.gz.sha512

Calm any fears of installing software not in your official Linux 
distribution by verifying the binary archive as follows:

   $ gpg --verify apache-ant-1.10.7-bin.tar.gz.asc
   ...
   gpg: Good signature from "jaikiran@apache <ja...@apache.org>"
   ...
   $ shasum -c apache-ant-1.10.7-bin.tar.gz.sha512
   apache-ant-1.10.7-bin.tar.gz: OK

Unpack the archive into "~/opt/apache-ant-1.10.7".

(3) Create the following Bash script, name it "nbbuild.sh", and put it 
in your "~/bin" directory. You will need to change the location of the 
JDK 8 directory (jdk8) on your system, and perhaps the location of 
ANT_HOME if you didn't put Ant 1.10.7 in "~/opt". The JDK 11 variables 
are commented out and are not used, so don't worry about them. Also note 
that I'm building just the "basic" cluster.

-----------
#!/bin/bash
# Builds the Apache NetBeans IDE with JDK 8 or 11
trap exit INT TERM
set -o errexit

# Ubuntu OpenJDK 8 and AdoptOpenJDK 11
jdk8="/usr/lib/jvm/java-8-openjdk-amd64"
jdk11="$HOME/opt/jdk-11.0.4+11"

# For JDK 8
export JAVA_HOME=$jdk8
options="-Dcluster.config=basic -Dnbjdk.home=$jdk8"

# For JDK 11
#export JAVA_HOME=$jdk11
#options="-Dcluster.config=basic -Dnbjdk.home=$jdk11 \
#    -Dpermit.jdk9.builds=true"

# Calls Apache Ant
export ANT_HOME=$HOME/opt/apache-ant-1.10.7
export PATH=$ANT_HOME/bin:$JAVA_HOME/bin:/usr/sbin:/usr/bin:/sbin:/bin
java -version
ant -version
echo ant $options "$@"
-----------

(4) You should see the following output when you run the script:

-----------
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-8u222-b10-1ubuntu1~16.04.1-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)
Apache Ant(TM) version 1.10.7 compiled on September 1 2019
ant -Dcluster.config=basic -Dnbjdk.home=/usr/lib/jvm/java-8-openjdk-amd64
-----------

(5) Now remove the "echo" from the last line of the script file. Change 
to the root of a clean NetBeans Git repository and try building with:

$ ~/bin/nbbuild.sh

RESULT

Those steps build NetBeans successfully for me with JDK 8, which was 
good enough to let me even debug a NetBeans module.

REMAINING MYSTERIES

(1) When I try the same script with JDK 11, I get a compilation error:

[javac] /home/john/tmp/github/jgneff/netbeans/ide/db/build/fake-jdbc-40/src/java/sql/RowIdLifetime.java:2: error: package exists in another module: java.sql

(2) When I pull the NetBeans Build System project (netbeans/nbbuild) 
into the latest NetBeans 11.2, I get one file flagged with an error:

File: nbbuild/antsrc/org/netbeans/nbbuild/ReleaseJsonProperties.java
Error: package org.json.simple does not exist

(3) I'm not sure when or how the file "nbbuild/user.build.properties" is 
created. Sometimes I get the file, other times I don't, and I'm not sure 
whether it matters.

(4) I also created the file "~/.nbbuild.properties" with the following 
properties, just in case, but I don't think it's necessary if the 
properties are defined on the "ant" command line.

-----------
# For JDK 8
nbjdk.home=/usr/lib/jvm/java-8-openjdk-amd64

# For JDK 11
# nbjdk.home=/home/john/opt/jdk-11.0.4+11
# permit.jdk9.builds=true
-----------

John





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
On Fri, 4 Oct 2019 17:23:02 +0100 Neil C Smith wrote:

> Basically do
> 
> sudo update-alternatives --display java
> sudo update-alternatives --display javac
> 
> to see what's available, and
> 
> sudo update-alternatives --config java
> sudo update-alternatives --config javac
> 
> to select which java and javac are picked up on the path.  Make sure
> both are set to 8.  Then just try with system ant and default sources.

Even that results in the same failure.





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Neil C Smith <ne...@apache.org>.
On Thu, 3 Oct 2019 at 18:37, mlist <ml...@riseup.net> wrote:
>
> On Thu, 3 Oct 2019 18:27:54 +0100 Neil C Smith wrote:
>
> > If Suse has update-alternatives or equivalent, you might be better
> > setting up java and javac with that instead?
>
> What is 'update-alternatives'?
>

It's this https://software.opensuse.org/package/update-alternatives

Example I found of using on Suse here -
https://forums.opensuse.org/showthread.php/512626-Java-8-jdk-and-update-alternatives-on-Leap

Basically do

sudo update-alternatives --display java
sudo update-alternatives --display javac

to see what's available, and

sudo update-alternatives --config java
sudo update-alternatives --config javac

to select which java and javac are picked up on the path.  Make sure
both are set to 8.  Then just try with system ant and default sources.

YMMV - works on Ubuntu anyway.

Best wishes,

Neil





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
On Thu, 3 Oct 2019 18:27:54 +0100 Neil C Smith wrote:

> If Suse has update-alternatives or equivalent, you might be better
> setting up java and javac with that instead?

What is 'update-alternatives'?





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Neil C Smith <ne...@apache.org>.
On Thu, 3 Oct 2019 at 18:19, mlist <ml...@riseup.net> wrote:
> Yes, but also later on after changing JAVA_HOME the build still fails.

If Suse has update-alternatives or equivalent, you might be better
setting up java and javac with that instead?

Best wishes,

Neil





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Geertjan Wielenga <ge...@apache.org>.
What is JAVA_HOME now set to?

Gj

On Thu, 3 Oct 2019 at 19:19, mlist <ml...@riseup.net> wrote:

> On Thu, 3 Oct 2019 18:51:53 +0200 Geertjan Wielenga wrote:
>
> > I copied and pasted the error messages from your stack trace which tells
> > you explicitly that your JAVA_HOME is pointing to the JRE instead of the
> > JDK.
>
> Yes, but also later on after changing JAVA_HOME the build still fails.
>

Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
On Thu, 3 Oct 2019 18:51:53 +0200 Geertjan Wielenga wrote:

> I copied and pasted the error messages from your stack trace which tells
> you explicitly that your JAVA_HOME is pointing to the JRE instead of the
> JDK.

Yes, but also later on after changing JAVA_HOME the build still fails.





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Geertjan Wielenga <ge...@apache.org>.
On Thu, 3 Oct 2019 at 18:49, mlist <ml...@riseup.net> wrote:

> Geertjan,
>
> I can't understand what you mean by this and how I am supposed to use
> it. Could you kindly clarify please?
>


I copied and pasted the error messages from your stack trace which tells
you explicitly that your JAVA_HOME is pointing to the JRE instead of the
JDK.

Gj







Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
Geertjan,

I can't understand what you mean by this and how I am supposed to use
it. Could you kindly clarify please?





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Geertjan Wielenga <ge...@apache.org>.
   BUILD FAILED
   /tmp/download/src/nbbuild/build.xml:59: Unable to find a javac compiler;
   com.sun.tools.javac.Main is not on the classpath.
   Perhaps JAVA_HOME does not point to the JDK.
   It is currently set to "/usr/lib64/jvm/java-1.8.0-openjdk-1.8.0/jre"

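
A JRE path like the one in that last line is easy to detect: a JDK ships javac next to java, while a JRE does not. A minimal sketch, using fabricated /tmp directories purely for the demo:

```shell
# is_jdk: true when the given Java home contains an executable javac.
is_jdk() { [ -x "$1/bin/javac" ]; }

# Fabricated layout: a "JDK" with javac and a bare "JRE" without one.
mkdir -p /tmp/demo-jdk/bin /tmp/demo-jre/bin
touch /tmp/demo-jdk/bin/javac && chmod +x /tmp/demo-jdk/bin/javac

is_jdk /tmp/demo-jdk && echo "/tmp/demo-jdk looks like a JDK"
is_jdk /tmp/demo-jre || echo "/tmp/demo-jre is only a JRE: do not point JAVA_HOME here"
```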

On Thu, Oct 3, 2019 at 2:26 PM mlist <ml...@riseup.net> wrote:

> On Wed, 2 Oct 2019 23:06:58 +0300 mlist wrote:
>
> > Hi,
> >
> > I am trying to build NetBeans 11.1 from source code on openSUSE Leap
> > 15. I have read the instructions and the README.md but unfortunately I
> > am getting errors which I don't know how to fix as I am not a Java
> > developer (my intention is to use NetBeans IDE for PHP, CSS,
> > JavaScript).
> >
> > Here is the output I am getting:
> >
> > https://susepaste.org/673abb68
> >
> > What should I do to make this work please?
>
> Can anyone please help?
>

Re: Apache Solr on FreeBSD

Posted by Jan Høydahl <ja...@cominvent.com>.
I don't think we have official nightly tests on FreeBSD, and there is no install-script support.
So the risk is that e.g. the bin/solr start script may be more likely to have bugs than on Linux.
However, many devs use macOS, which is BSD-based, and the start scripts support that.
As long as you have a current Java JRE, you should be fine with the Java parts.

There seems to be an official "port" for Solr at https://www.freshports.org/textproc/apache-solr/ to make up for the lack of an install script. I have not tested it, though.

Jan

> 7. apr. 2022 kl. 07:06 skrev Sam Lee <sa...@yahoo.com.INVALID>:
> 
> Does Apache Solr work out of the box on FreeBSD? Does the Solr project
> officially support running Solr on FreeBSD? Or is Solr only for Linux,
> macOS, and Windows?


Re: [PATCH] Cleanup wc in the svn.wc python binding test script

Posted by Garrett Rooney <ro...@electricjellyfish.net>.
On 6/17/06, Madan U Sreenivasan <ma...@collab.net> wrote:

> A gentle reminder to the committers... am dependant on part of this patch
> for my next patch too...

For the benefit of anyone else working through their email backlog, it
looks like Jelmer has committed a version of this already.

-garrett

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: apache svn server memory usage?

Posted by Justin Erenkrantz <ju...@erenkrantz.com>.
On Tue, Jul 01, 2003 at 12:18:15AM -0700, Chris Hecker wrote:
> I'm actually trying to figure out why svn seems slow (slow is defined as 
> relatively high latency and low throughput...each command takes a while to 
> run and doesn't move data very fast across the network), and I thought I'd 
> start with this.  However, according to atsar the machine's not paging at 

You probably don't have the streamy PROPFIND responses in mod_dav that
Ben just added.

These patches have been approved and will be in 2.0.47 - you can also
use APACHE_2_0_BRANCH from httpd-2.0's CVS until then.  -- justin 


Re: Attention list owners

Posted by "C. Michael Pilato" <cm...@collab.net>.
"Jan Hendrik" <ja...@bigfoot.com> writes:

> Could the owners of the list remove me from this, that is from all 
> lists hosted on tigris.org, please? 

What address is subscribed?  I don't see any obvious matches in the
list of subscribers on dev@s.t.o or user@s.t.o, and Fitz claims he has
to moderate your mails through.


Re: problems with ojb 0.9.8

Posted by Armin Waibel <ar...@code-au-lait.de>.
Damned! A hard nut to crack ;-)
Now it seems to be a field-configuration or a database problem.
In both cases I'm not an expert.

<snip>
    /*
     * @see Platform#setObject(PreparedStatement, int, Object, int)
     */
    public void setObjectForStatement(PreparedStatement ps, int index, Object value, int sqlType)
            throws SQLException
    {
        if ((value instanceof String) && (sqlType == Types.LONGVARCHAR))
        {
            String s = (String) value;
            ps.setCharacterStream(index, new StringReader(s), s.length());
        }
        else
        {
            ps.setObject(index, value, sqlType);
        }
    }

setCharacterStream was called because LONGVARCHAR was used.
Did you use the latest version of HSQLDB? I think LONGVARCHAR fields
should be supported in the current version.
Maybe take a look in repository_junit.xml for an alternative field type.
Sorry I couldn't help more.

regards,
Armin

----- Original Message -----
From: "David Forslund" <dw...@lanl.gov>
To: "OJB Users List" <oj...@jakarta.apache.org>
Sent: Monday, December 30, 2002 10:25 PM
Subject: Re: problems with ojb 0.9.8


> At 09:25 PM 12/30/2002 +0100, Armin Waibel wrote:
> >Hi again,
> >
> > > > > >
> > > > > > > I see what the problem is, but am not sure what the
solution
> >is.
> > > > > > >
> > > > > > > I have a an abstract class that is implemented with a
number
> >of
> > > > > >classes.
> > > > > > > I'm trying to create a unique key for an instance class,
but
> >when
> > > >I
> > > > > > > check there are no field descriptors for the base class.
> > > > > >
> > > > > >Have you tried
> > > > > >Class realClass = abstractBaseClass.getClass();
> > > > > >ClassDescriptor cld = broker.getClassDescriptor(realClass);
> > > > > >to get the real class descriptor? Then it should possible to
get
> >the
> > > > > >field.
> > > > >
> > > > > This doesn't help because I'm just calling the getUniqueId
within
> >OJB
> > > > > and I don't have any control over what it does except through
> > > > > the repository.
> > > >
> > > >
> > > >I do not understand this. You declare your 'valueId' as a
> >autoincrement
> > > >field, but in your stack trace it seems you do a direct call
> > > >PB.getUniqueId?
> > >
> > > Well I did add this because 0.9.8 was complaining about this field
> >being
> > > absent.  I have removed it without any change in the behavior.
> > >
> > >
> > > > > >
> > > >
> >
>>>org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueI
> > > > > > > >>> >
> > > > >
> >
>gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
> > > > > > > >>> >          at
> > > > > > > >>> >
> > > > > > >
> > > > > >
> > > >
> >
>>>gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationVal
> > > >
> > > >Could you post a code snip to ease my understanding?
> > > >But by the way this seems to be a bug.
> > >
> > > I'm not sure what you mean by a code snippet.  When I call the
class
> > > constructor,
> > > I call getUniqueId with the class name and attribute:
> > >
> > > This is the specific method I call .
> > > /**
> > >       * return next number in a persistent sequence
> > >       */
> > >      public long getNextSeq(Class clazz, String fieldName) {
> > >          cat.debug("getNextSeq: "+clazz.getName() + "
"+fieldName);
> > >          // return sequenceManager.getUniqueId(clazz, fieldName);
> > >          try {
> > >              // get the CLD for the base class
> > >              ClassDescriptor cld = broker.getClassDescriptor(clazz);
> > >              if ((cld.isAbstract() || cld.isInterface()) && cld.isExtent())
> > >              {
> > >                  // get the first found extent class; we grab the first
> > >                  clazz = (Class) cld.getExtentClasses().get(0);
> > >              }
> > >
> > >              return broker.getUniqueId(clazz, fieldName);
> > >          } catch (org.apache.ojb.broker.PersistenceBrokerException
e)
> >{
> > >              cat.error("Can't get ID from broker: " +
clazz.getName()
> >+ " "
> > > + fieldName, e);
> > >
> > >              // System.exit(1);
> > >              return 0;
> > >          }
> > >      }
> > >
> >Maybe this could be a workaround for your problem.
> >Keep in mind that getUniqueId(clazz, fieldName) was deprecated
> >and will be replaced by getUniqueId(FieldDescriptor field).
> >
> >What I don't understand is why you need a getNextSeq method
> >when you define autoincrement fields. OJB does all sequence key
> >generation automatically for you.
>
> It didn't use to.  Plus we allow for other OR mapping tools and
> need some level of control of ids, independent of the tool.
>
> OK.  I made this change and that problem went away, but another
> one came up.  I now get an error from hsqldb that the requested
function
> is not supported:
>
> [org.apache.ojb.broker.accesslayer.JdbcAccess] ERROR: SQLException during the execution of the insert (for a gov.lanl.COAS.String_): This function is not supported
> This function is not supported
> java.sql.SQLException: This function is not supported
>          at org.hsqldb.Trace.getError(Trace.java:180)
>          at org.hsqldb.Trace.getError(Trace.java:144)
>          at org.hsqldb.Trace.error(Trace.java:192)
>          at org.hsqldb.jdbcPreparedStatement.getNotSupported(jdbcPreparedStatement.java:1602)
>          at org.hsqldb.jdbcPreparedStatement.setCharacterStream(jdbcPreparedStatement.java:1375)
>          at org.apache.ojb.broker.platforms.PlatformDefaultImpl.setObjectForStatement(PlatformDefaultImpl.java:216)
>          at org.apache.ojb.broker.accesslayer.StatementManager.bindInsert(StatementManager.java:487)
>          at org.apache.ojb.broker.accesslayer.JdbcAccess.executeInsert(JdbcAccess.java:194)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1966)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:1905)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:614)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeReferences(PersistenceBrokerImpl.java:641)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1938)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:1905)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:614)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeCollectionObject(PersistenceBrokerImpl.java:789)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeCollections(PersistenceBrokerImpl.java:769)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1989)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:1905)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:614)
>          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:588)
>          at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.store(DelegatingPersistenceBroker.java:123)
>          at gov.lanl.Database.OJBDatabaseMgr.insertElement(OJBDatabaseMgr.java:30
> gov.lanl.Database.OJBDatabaseMgr.insertElement(OJBDatabaseMgr.java:30
>
> Thanks,
>
> Dave
>
> >HTH
> >regards,
> >Armin
> >
> > > thanks,
> > >
> > > Dave
> > >
> > >
> > > >Armin
> > > >
> > > >
> > > > >
> > > > > > >Or define your base class with all fields in the repository file
> > > > > > >and declare all extent-classes in the class-descriptor. Then the
> > > > > > >default sequence manager implementations should be able to
> > > > > > >generate an id unique across all extents.
> > > > > > >Or define only the abstract class with all extent-classes, then
> > > > > > >you should be able to get one of the extent classes.
> > > > >
> > > > > This is how I have it defined in my repository_user.xml
> > > > >
> > > > >    <class-descriptor class="gov.lanl.COAS.ObservationValue_">
> > > > >      <extent-class class-ref="gov.lanl.COAS.Multimedia_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.NoInformation_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.Numeric_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.ObservationId_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.QualifiedCodeInfo_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.QualifiedPersonId_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.Range_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.String_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.TimeSpan_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.UniversalResourceIdentifier_"/>
> > > > >      <extent-class class-ref="gov.lanl.COAS.Empty_"/>
> > > > >    </class-descriptor>
> > > > >
> > > > > and an example for one of the extent classes
> > > > >
> > > > >   <class-descriptor
> > > > >         isolation-level="read-uncommitted"
> > > > >         class="gov.lanl.COAS.Empty_"
> > > > >         table="OjbEmpty_"
> > > > >   >
> > > > >     <field-descriptor id="1"
> > > > >         name="valueId"
> > > > >         jdbc-type="INTEGER"
> > > > >         column="valueId"
> > > > >         primarykey="true"
> > > > >         autoincrement="true"
> > > > >     />
> > > > >
> > > > >   </class-descriptor>
> > > > >
> > > > > there is no table for the ObservationValue_ class because it is an
> > > > > Abstract Class.
> > > > > this is what I've been using for 0.9.7 and it works fine.  this fails
> > > > > under 0.9.8 when trying to get a uniqueid for each of the extent
> > > > > classes.  I think this is what you are describing in your last
> > > > > suggestion.
> > > > >
> > > > > thanks,
> > > > > Dave
> > > > >
> > > > >
> > > > > >HTH
> > > > > >regards,
> > > > > >Armin
> > > > > >
> > > > > > >
> > > > > > > This all worked fine in 0.9.7, but perhaps there has been some
> > > > > > > change in the semantics?  We put the necessary table elements in
> > > > > > > each instance of the class but not in the table for the base
> > > > > > > class (which actually doesn't exist).
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Dave
> > > > > > >
> > > > > > > At 11:35 AM 12/30/2002 -0700, David Forslund wrote:
> > > > > > > >When I put a check inside of the getFieldDescriptor, I find that
> > > > > > > >it is being called by HighLowSequence
> > > > > > > >with the argument ojbConcreteClass and is returning a null for
> > > > > > > >the field.  Is this what is expected?
> > > > > > > >
> > > > > > > >Dave
> > > > > > > >
> > > > > > > >At 10:43 AM 12/30/2002 -0700, David Forslund wrote:
> > > > > > > >>It wasn't null in my code that called the OJB code.  This code
> > > > > > > >>has been working fine in 0.9.7.    If the xml needed to change
> > > > > > > >>for some reason, it might have caused this.  I'm passing in a
> > > > > > > >>string of a variable that is defined in my table.   Whether OJB
> > > > > > > >>properly connects a "Field" to that table is where the problem
> > > > > > > >>may be.   It did in the past without any problem.   I have a
> > > > > > > >>hard time telling exactly what changed between these two
> > > > > > > >>versions.
> > > > > > > >>
> > > > > > > >>Thanks,
> > > > > > > >>
> > > > > > > >>Dave
> > > > > > > >>At 01:49 PM 12/30/2002 +0100, Armin Waibel wrote:
> > > > > > > >>>Hi David,
> > > > > > > >>>
> > > > > > > >>>the sequence generator implementation now only generates
> > > > > > > >>>ids for fields declared in the repository.
> > > > > > > >>>I think you got this NullPointerException, because SM gets a
> > > > > > > >>>'null' field:
> > > > > > > >>>
> > > > > > > >>><snip SequenceManagerHelper>
> > > > > > > >>>public static String buildSequenceName(
> > > > > > > >>>PersistenceBroker brokerForClass, FieldDescriptor
field)
> > > > > > > >>>     {
> > > > > > > >>>48--->!!! ClassDescriptor cldTargetClass =
> > > > > >field.getClassDescriptor();
> > > > > > > >>>                 String seqName =
field.getSequenceName();
> > > > > > > >>>.....
> > > > > > > >>></snip>
> > > > > > > >>>
> > > > > > > >>>So check your code that the given FieldDescriptor wasn't
> > > > > > > >>>null.
> > > > > > > >>>
> > > > > > > >>>HTH
> > > > > > > >>>
> > > > > > > >>>regards,
> > > > > > > >>>Armin
> > > > > > > >>>
> > > > > > > >>>----- Original Message -----
> > > > > > > >>>From: "David Forslund" <dw...@lanl.gov>
> > > > > > > >>>To: "OJB Users List" <oj...@jakarta.apache.org>
> > > > > > > >>>Sent: Monday, December 30, 2002 1:33 AM
> > > > > > > >>>Subject: Re: problems with ojb 0.9.8
> > > > > > > >>>
> > > > > > > >>>
> > > > > > > >>> > I'm trying to upgrade from 0.9.7 to 0.9.8 and am having
> > > > > > > >>> > some problems that I don't understand yet.
> > > > > > > >>> >
> > > > > > > >>> > I'm getting the warning about not finding an autoincrement
> > > > > > > >>> > attribute for a class.  I'm not sure when I have to have an
> > > > > > > >>> > autoincrement attribute, but the primarykey for the class
> > > > > > > >>> > I'm using is a varchar so that autoincrement doesn't seem
> > > > > > > >>> > appropriate.
> > > > > > > >>> >
> > > > > > > >>> > Subsequently, I get a null pointer exception error in the
> > > > > > > >>> > SequenceManagerHelper that I don't understand:
> > > > > > > >>> > java.lang.NullPointerException
> > > > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHelper.buildSequenceName(SequenceManagerHelper.java:48)
> > > > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHiLoImpl.getUniqueId(SequenceManagerHiLoImpl.java:49)
> > > > > > > >>> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.getUniqueId(PersistenceBrokerImpl.java:2258)
> > > > > > > >>> >          at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueId(DelegatingPersistenceBroker.java:242)
> > > > > > > >>> >          at gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
> > > > > > > >>> >          at gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationValue_.java:54)
> > > > > > > >>> >          at gov.lanl.COAS.Empty_.<init>(Empty_.java:31)
> > > > > > > >>> >
> > > > > > > >>> > I'm pretty sure that it is being called correctly from my
> > > > > > > >>> > code (which works fine in 0.9.7), but it is failing now.
> > > > > > > >>> >
> > > > > > > >>> > An unrelated warning in a different application is that OJB
> > > > > > > >>> > says I should use addLike() for using LIKE, but it seems to
> > > > > > > >>> > use the right code anyway.  Is this just a deprecation
> > > > > > > >>> > issue?  I don't see why it bothers to tell me this, if it
> > > > > > > >>> > can figure out what to do anyway.
> > > > > > > >>> >
> > > > > > > >>> > Thanks,
> > > > > > > >>> >
> > > > > > > >>> > Dave
> > > > > > > >>> >
> > > > > > > >>> >
> > > > > > > >>> > --
> > > > > > > >>> > To unsubscribe, e-mail:
> > > > > > > >>><ma...@jakarta.apache.org>
> > > > > > > >>> > For additional commands, e-mail:
> > > > > > > >>><ma...@jakarta.apache.org>
> > > > > > > >>> >


Re: problems with ojb 0.9.8

Posted by David Forslund <dw...@lanl.gov>.
At 09:25 PM 12/30/2002 +0100, Armin Waibel wrote:
>Hi again,
>
> > > > >
> > > > > > I see what the problem is, but am not sure what the solution is.
> > > > > >
> > > > > > I have an abstract class that is implemented with a number of
> > > > > > classes.  I'm trying to create a unique key for an instance class,
> > > > > > but when I check there are no field descriptors for the base class.
> > > > >
> > > > >Have you tried
> > > > >Class realClass = abstractBaseClass.getClass();
> > > > >ClassDescriptor cld = broker.getClassDescriptor(realClass);
> > > > >to get the real class descriptor? Then it should be possible to get
> > > > >the field.
> > > >
> > > > This doesn't help because I'm just calling the getUniqueId within OJB
> > > > and I don't have any control over what it does except through
> > > > the repository.
> > >
> > >
> > >I do not understand this. You declare your 'valueId' as an autoincrement
> > >field, but in your stack trace it seems you do a direct call
> > >PB.getUniqueId?
> >
> > Well I did add this because 0.9.8 was complaining about this field being
> > absent.  I have removed it without any change in the behavior.
> >
> >
> >>>org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueI
> >gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
> >>>gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationVal
> >
> > >Could you post a code snip to ease my understanding?
> > >But by the way this seems to be a bug.
> >
> > I'm not sure what you mean by a code snippet.  When I call the class
> > constructor,
> > I call getUniqueId with the class name and attribute:
> >
> > This is the specific method I call .
> > /**
> >       * return next number in a persistent sequence
> >       */
> >      public long getNextSeq(Class clazz, String fieldName) {
> >          cat.debug("getNextSeq: "+clazz.getName() + " "+fieldName);
> >          // return sequenceManager.getUniqueId(clazz, fieldName);
> >          try {
>                 // get the CLD for the base class
>                 ClassDescriptor cld = broker.getClassDescriptor(clazz);
>                 if ((cld.isAbstract() || cld.isInterface()) && cld.isExtent())
>                 {
>                     // get the first found extent class
>                     clazz = (Class) cld.getExtentClasses().get(0); // we grab the first
>                 }
>
> >              return broker.getUniqueId(clazz, fieldName);
> >          } catch (org.apache.ojb.broker.PersistenceBrokerException e) {
> >              cat.error("Can't get ID from broker: " + clazz.getName()
> >                        + " " + fieldName, e);
> >
> >              // System.exit(1);
> >              return 0;
> >          }
> >      }
> >
>Maybe this could be a workaround for your problem.
>Keep in mind that getUniqueId(clazz, fieldName) was deprecated
>and will be replaced by getUniqueId(FieldDescriptor field).
>
>What I don't understand is why you need a getNextSeq method
>when you define autoincrement fields. OJB does all sequence key
>generation automatically for you.

It didn't use to.  Plus we allow for other OR mapping tools and
need some level of control of ids, independent of the tool.
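[Editor's aside: the sequence manager implicated in the traces in this thread is OJB's HiLo implementation (SequenceManagerHiLoImpl). For readers unfamiliar with it, here is a minimal, database-free sketch of the HiLo idea. This is illustrative code, not OJB source; the real implementation persists the "hi" value in a database table.]

```java
// Minimal sketch of the HiLo id-generation scheme (editor's illustration,
// not OJB code). The real SequenceManagerHiLoImpl stores the "hi" value
// in a table; this stand-in keeps it in memory.
public class HiLoSketch {
    private final int grabSize; // ids reserved per block
    private long hi;            // upper bound of the current reserved block
    private int used;           // ids already handed out from the block

    public HiLoSketch(int grabSize) {
        this.grabSize = grabSize;
        this.used = grabSize;   // forces a block grab on the first call
    }

    // In OJB this step is an update on the sequence table; here it is local.
    private void grabNextBlock() {
        hi += grabSize;
        used = 0;
    }

    public long nextId() {
        if (used >= grabSize) {
            grabNextBlock();
        }
        used++;
        return hi - grabSize + used;
    }
}
```

[With a grab size of 20, the first reserved block yields ids 1-20 and the second 21-40, so a real implementation needs only one database round trip per block.]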

OK.  I made this change and that problem went away, but another
one came up.  I now get an error from hsqldb that the requested function
is not supported:

[org.apache.ojb.broker.accesslayer.JdbcAccess] ERROR: SQLException during
the execution of the insert (for a gov.lanl.COAS.String_): This function is not
supported
This function is not supported
java.sql.SQLException: This function is not supported
         at org.hsqldb.Trace.getError(Trace.java:180)
         at org.hsqldb.Trace.getError(Trace.java:144)
         at org.hsqldb.Trace.error(Trace.java:192)
         at org.hsqldb.jdbcPreparedStatement.getNotSupported(jdbcPreparedStatement.java:1602)
         at 
org.hsqldb.jdbcPreparedStatement.setCharacterStream(jdbcPreparedStatement.java:1375)
         at 
org.apache.ojb.broker.platforms.PlatformDefaultImpl.setObjectForStatement(PlatformDefaultImpl.java:216)
         at 
org.apache.ojb.broker.accesslayer.StatementManager.bindInsert(StatementManager.java:487)
         at 
org.apache.ojb.broker.accesslayer.JdbcAccess.executeInsert(JdbcAccess.java:194)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1966)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:1905)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:614)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeReferences(PersistenceBrokerImpl.java:641)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1938)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:1905)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:614)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeCollectionObject(PersistenceBrokerImpl.java:789)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeCollections(PersistenceBrokerImpl.java:769)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.storeToDb(PersistenceBrokerImpl.java:1989)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:1905)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:614)
         at 
org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.store(PersistenceBrokerImpl.java:588)
         at 
org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.store(DelegatingPersistenceBroker.java:123)
         at 
gov.lanl.Database.OJBDatabaseMgr.insertElement(OJBDatabaseMgr.java:30

Thanks,

Dave

>HTH
>regards,
>Armin
>
> > thanks,
> >
> > Dave
> >
> >
> > >Armin
> > >
> > >
> > > >
> > > > >Or define your base class with all fields in the repository file and
> > > > >declare all extent-classes in the class-descriptor. Then the default
> > > > >sequence manager implementations should be able to generate an id
> > > > >unique across all extents.
> > > > >Or define only the abstract class with all extent-classes, then you
> > > > >should be able to get one of the extent classes.
> > > >
> > > > This is how I have it defined in my repository_user.xml
> > > >
> > > >    <class-descriptor class="gov.lanl.COAS.ObservationValue_">
> > > >      <extent-class class-ref="gov.lanl.COAS.Multimedia_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.NoInformation_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.Numeric_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.ObservationId_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.QualifiedCodeInfo_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.QualifiedPersonId_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.Range_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.String_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.TimeSpan_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.UniversalResourceIdentifier_"/>
> > > >      <extent-class class-ref="gov.lanl.COAS.Empty_"/>
> > > >    </class-descriptor>
> > > >
> > > > and an example for one of the extent classes
> > > >
> > > >   <class-descriptor
> > > >         isolation-level="read-uncommitted"
> > > >         class="gov.lanl.COAS.Empty_"
> > > >         table="OjbEmpty_"
> > > >   >
> > > >     <field-descriptor id="1"
> > > >         name="valueId"
> > > >         jdbc-type="INTEGER"
> > > >         column="valueId"
> > > >         primarykey="true"
> > > >         autoincrement="true"
> > > >     />
> > > >
> > > >   </class-descriptor>
> > > >
> > > > there is no table for the ObservationValue_ class because it is an
> > > > Abstract Class.
> > > > this is what I've been using for 0.9.7 and it works fine.  this fails
> > > > under 0.9.8 when trying to get a uniqueid for each of the extent
> > > > classes.  I think this is what you are describing in your last
> > > > suggestion.
> > > >
> > > > thanks,
> > > > Dave
> > > >
> > > >
> > > > >HTH
> > > > >regards,
> > > > >Armin
> > > > >
> > > > > >
> > > > > > This all worked fine in 0.9.7, but perhaps there has been some
> > > > > > change in the semantics?  We put the necessary table elements in
> > > > > > each instance of the class but not in the table for the base class
> > > > > > (which actually doesn't exist).
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > Dave
> > > > > >
> > > > > > At 11:35 AM 12/30/2002 -0700, David Forslund wrote:
> > > > > > >When I put a check inside of the getFieldDescriptor, I find that
> > > > > > >it is being called by HighLowSequence
> > > > > > >with the argument ojbConcreteClass and is returning a null for
> > > > > > >the field.  Is this what is expected?
> > > > > > >
> > > > > > >Dave
> > > > > > >
> > > > > > >At 10:43 AM 12/30/2002 -0700, David Forslund wrote:
> > > > > > >>It wasn't null in my code that called the OJB code.  This code
> > > > > > >>has been working fine in 0.9.7.    If the xml needed to change
> > > > > > >>for some reason, it might have caused this.  I'm passing in a
> > > > > > >>string of a variable that is defined in my table.   Whether OJB
> > > > > > >>properly connects a "Field" to that table is where the problem
> > > > > > >>may be.   It did in the past without any problem.   I have a hard
> > > > > > >>time telling exactly what changed between these two versions.
> > > > > > >>
> > > > > > >>Thanks,
> > > > > > >>
> > > > > > >>Dave
> > > > > > >>At 01:49 PM 12/30/2002 +0100, Armin Waibel wrote:
> > > > > > >>>Hi David,
> > > > > > >>>
> > > > > > >>>the sequence generator implementation now only generates
> > > > > > >>>ids for fields declared in the repository.
> > > > > > >>>I think you got this NullPointerException, because SM gets a
> > > > > > >>>'null' field:
> > > > > > >>>
> > > > > > >>><snip SequenceManagerHelper>
> > > > > > >>>public static String buildSequenceName(
> > > > > > >>>PersistenceBroker brokerForClass, FieldDescriptor field)
> > > > > > >>>     {
> > > > > > >>>48--->!!! ClassDescriptor cldTargetClass =
> > > > >field.getClassDescriptor();
> > > > > > >>>                 String seqName = field.getSequenceName();
> > > > > > >>>.....
> > > > > > >>></snip>
> > > > > > >>>
> > > > > > >>>So check your code that the given FieldDescriptor wasn't
> > > > > > >>>null.
> > > > > > >>>
> > > > > > >>>HTH
> > > > > > >>>
> > > > > > >>>regards,
> > > > > > >>>Armin
> > > > > > >>>
> > > > > > >>>----- Original Message -----
> > > > > > >>>From: "David Forslund" <dw...@lanl.gov>
> > > > > > >>>To: "OJB Users List" <oj...@jakarta.apache.org>
> > > > > > >>>Sent: Monday, December 30, 2002 1:33 AM
> > > > > > >>>Subject: Re: problems with ojb 0.9.8
> > > > > > >>>
> > > > > > >>>
> > > > > > >>> > I'm trying to upgrade from 0.9.7 to 0.9.8 and am having
> > > > > > >>> > some problems that I don't understand yet.
> > > > > > >>> >
> > > > > > >>> > I'm getting the warning about not finding an autoincrement
> > > > > > >>> > attribute for a class.  I'm not sure when I have to have an
> > > > > > >>> > autoincrement attribute, but the primarykey for the class
> > > > > > >>> > I'm using is a varchar so that autoincrement doesn't seem
> > > > > > >>> > appropriate.
> > > > > > >>> >
> > > > > > >>> > Subsequently, I get a null pointer exception error in the
> > > > > > >>> > SequenceManagerHelper that I don't understand:
> > > > > > >>> > java.lang.NullPointerException
> > > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHelper.buildSequenceName(SequenceManagerHelper.java:48)
> > > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHiLoImpl.getUniqueId(SequenceManagerHiLoImpl.java:49)
> > > > > > >>> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.getUniqueId(PersistenceBrokerImpl.java:2258)
> > > > > > >>> >          at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueId(DelegatingPersistenceBroker.java:242)
> > > > > > >>> >          at gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
> > > > > > >>> >          at gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationValue_.java:54)
> > > > > > >>> >          at gov.lanl.COAS.Empty_.<init>(Empty_.java:31)
> > > > > > >>> >
> > > > > > >>> > I'm pretty sure that it is being called correctly from my
> > > > > > >>> > code (which works fine in 0.9.7), but it is failing now.
> > > > > > >>> >
> > > > > > >>> > An unrelated warning in a different application is that OJB
> > > > > > >>> > says I should use addLike() for using LIKE, but it seems to
> > > > > > >>> > use the right code anyway.  Is this just a deprecation
> > > > > > >>> > issue?  I don't see why it bothers to tell me this, if it
> > > > > > >>> > can figure out what to do anyway.
> > > > > > >>> >
> > > > > > >>> > Thanks,
> > > > > > >>> >
> > > > > > >>> > Dave
> > > > > > >>> >
> > > > > > >>> >


Re: problems with ojb 0.9.8

Posted by Armin Waibel <ar...@code-au-lait.de>.
Hi again,

> > > >
> > > > > I see what the problem is, but am not sure what the solution is.
> > > > >
> > > > > I have an abstract class that is implemented with a number of
> > > > > classes.  I'm trying to create a unique key for an instance class,
> > > > > but when I check there are no field descriptors for the base class.
> > > >
> > > >Have you tried
> > > >Class realClass = abstractBaseClass.getClass();
> > > >ClassDescriptor cld = broker.getClassDescriptor(realClass);
> > > >to get the real class descriptor? Then it should be possible to get
> > > >the field.
> > >
> > > This doesn't help because I'm just calling the getUniqueId within OJB
> > > and I don't have any control over what it does except through
> > > the repository.
> >
> >
> >I do not understand this. You declare your 'valueId' as an autoincrement
> >field, but in your stack trace it seems you do a direct call
> >PB.getUniqueId?
>
> Well I did add this because 0.9.8 was complaining about this field being
> absent.  I have removed it without any change in the behavior.
>
>
>>>org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueI
>gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
>>>gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationVal
> >
> >Could you post a code snip to ease my understanding?
> >But by the way this seems to be a bug.
>
> I'm not sure what you mean by a code snippet.  When I call the class
> constructor,
> I call getUniqueId with the class name and attribute:
>
> This is the specific method I call .
> /**
>       * return next number in a persistent sequence
>       */
>      public long getNextSeq(Class clazz, String fieldName) {
>          cat.debug("getNextSeq: "+clazz.getName() + " "+fieldName);
>          // return sequenceManager.getUniqueId(clazz, fieldName);
>          try {
                // get the CLD for the base class
                ClassDescriptor cld = broker.getClassDescriptor(clazz);
                if ((cld.isAbstract() || cld.isInterface()) && cld.isExtent())
                {
                    // get the first found extent class
                    clazz = (Class) cld.getExtentClasses().get(0); // we grab the first
                }

>              return broker.getUniqueId(clazz, fieldName);
>          } catch (org.apache.ojb.broker.PersistenceBrokerException e) {
>              cat.error("Can't get ID from broker: " + clazz.getName()
>                        + " " + fieldName, e);
>
>              // System.exit(1);
>              return 0;
>          }
>      }
>
Maybe this could be a workaround for your problem.
Keep in mind that getUniqueId(clazz, fieldName) was deprecated
and will be replaced by getUniqueId(FieldDescriptor field).
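[Editor's aside: the extent-delegation step in the snippet above can be isolated into a small self-contained routine. The names below are illustrative, not OJB API; the code only mirrors the logic of the workaround (an abstract class or interface with registered extent classes delegates to its first extent):]

```java
import java.lang.reflect.Modifier;
import java.util.List;
import java.util.Map;

// Editor's sketch of the workaround logic above (illustrative names, not
// OJB API): if the class is abstract or an interface and has registered
// extent classes, delegate to the first extent before asking for an id.
public class ExtentResolver {
    private final Map<Class<?>, List<Class<?>>> extents;

    public ExtentResolver(Map<Class<?>, List<Class<?>>> extents) {
        this.extents = extents;
    }

    public Class<?> resolve(Class<?> clazz) {
        boolean abstractOrInterface =
                Modifier.isAbstract(clazz.getModifiers()) || clazz.isInterface();
        List<Class<?>> subs = extents.get(clazz);
        if (abstractOrInterface && subs != null && !subs.isEmpty()) {
            return subs.get(0); // grab the first extent class, as in the workaround
        }
        return clazz;
    }
}
```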

What I don't understand is why you need a getNextSeq method
when you define autoincrement fields. OJB does all sequence key
generation automatically for you.

HTH
regards,
Armin

> thanks,
>
> Dave
>
>
> >Armin
> >
> >
> > >
> > > >Or define your base class with all fields in the repository file and
> > > >declare all extent-classes in the class-descriptor. Then the default
> > > >sequence manager implementations should be able to generate an id
> > > >unique across all extents.
> > > >Or define only the abstract class with all extent-classes, then you
> > > >should be able to get one of the extent classes.
> > >
> > > This is how I have it defined in my repository_user.xml
> > >
> > >    <class-descriptor class="gov.lanl.COAS.ObservationValue_">
> > >      <extent-class class-ref="gov.lanl.COAS.Multimedia_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.NoInformation_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.Numeric_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.ObservationId_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.QualifiedCodeInfo_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.QualifiedPersonId_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.Range_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.String_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.TimeSpan_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.UniversalResourceIdentifier_"/>
> > >      <extent-class class-ref="gov.lanl.COAS.Empty_"/>
> > >    </class-descriptor>
> > >
> > > and an example for one of the extent classes
> > >
> > >   <class-descriptor
> > >         isolation-level="read-uncommitted"
> > >         class="gov.lanl.COAS.Empty_"
> > >         table="OjbEmpty_"
> > >   >
> > >     <field-descriptor id="1"
> > >         name="valueId"
> > >         jdbc-type="INTEGER"
> > >         column="valueId"
> > >         primarykey="true"
> > >         autoincrement="true"
> > >     />
> > >
> > >   </class-descriptor>
> > >
> > > there is no table for the ObservationValue_ class because it is an
> > > Abstract Class.
> > > this is what I've been using for 0.9.7 and it works fine.  this
> > > fails under 0.9.8 when trying to get a uniqueid for each of the
> > > extent classes.  I think this is what you are describing in your
> > > last suggestion.
> > >
> > > thanks,
> > > Dave
> > >
> > >
> > > >HTH
> > > >regards,
> > > >Armin
> > > >
> > > > >
> > > > > This all worked fine in 0.9.7, but perhaps there has been some
> > > > > change in the semantics?  We put the necessary table elements
> > > > > in each instance of the class but not in the table for the base
> > > > > class (which actually doesn't exist).
> > > > >
> > > > > Thanks,
> > > > >
> > > > > Dave
> > > > >
> > > > > At 11:35 AM 12/30/2002 -0700, David Forslund wrote:
> > > > > >When I put a check inside of the getFieldDescriptor, I find
> > > > > >that it is being called by HighLowSequence with the argument
> > > > > >ojbConcreteClass and is returning a null for the field.  Is
> > > > > >this what is expected?
> > > > > >
> > > > > >Dave
> > > > > >
> > > > > >At 10:43 AM 12/30/2002 -0700, David Forslund wrote:
> > > > > >>It wasn't null in my code that called the OJB code.  This
> > > > > >>code has been working fine in 0.9.7.  If the xml needed to
> > > > > >>change for some reason, it might have caused this.  I'm
> > > > > >>passing in a string of a variable that is defined in my
> > > > > >>table.  Whether OJB properly connects a "Field" to that
> > > > > >>table is where the problem may be.  It did in the past
> > > > > >>without any problem.  I have a hard time telling exactly
> > > > > >>what changed between these two versions.
> > > > > >>
> > > > > >>Thanks,
> > > > > >>
> > > > > >>Dave
> > > > > >>At 01:49 PM 12/30/2002 +0100, Armin Waibel wrote:
> > > > > >>>Hi David,
> > > > > >>>
> > > > > >>>the sequence generator implementation now only generates
> > > > > >>>id's for fields declared in the repository.
> > > > > >>>I think you got this NullPointerException because the SM
> > > > > >>>got a 'null' field:
> > > > > >>>
> > > > > >>><snip SequenceManagerHelper>
> > > > > >>>public static String buildSequenceName(
> > > > > >>>        PersistenceBroker brokerForClass, FieldDescriptor field)
> > > > > >>>     {
> > > > > >>>48--->!!! ClassDescriptor cldTargetClass = field.getClassDescriptor();
> > > > > >>>                 String seqName = field.getSequenceName();
> > > > > >>>.....
> > > > > >>></snip>
> > > > > >>>
> > > > > >>>So check your code that the given FieldDescriptor wasn't null.
> > > > > >>>
> > > > > >>>HTH
> > > > > >>>
> > > > > >>>regards,
> > > > > >>>Armin
> > > > > >>>
> > > > > >>>----- Original Message -----
> > > > > >>>From: "David Forslund" <dw...@lanl.gov>
> > > > > >>>To: "OJB Users List" <oj...@jakarta.apache.org>
> > > > > >>>Sent: Monday, December 30, 2002 1:33 AM
> > > > > >>>Subject: Re: problems with ojb 0.9.8
> > > > > >>>
> > > > > >>>
> > > > > >>> > I'm trying to upgrade from 0.9.7 to 0.9.8 and am having
> > > > > >>> > some problems that I don't understand yet.
> > > > > >>> >
> > > > > >>> > I'm getting the warning about not finding an autoincrement
> > > > > >>> > attribute for a class.  I'm not sure when I have to have an
> > > > > >>> > autoincrement attribute, but the primarykey for the class
> > > > > >>> > I'm using is a varchar so that autoincrement doesn't seem
> > > > > >>> > appropriate.
> > > > > >>> >
> > > > > >>> > Subsequently, I get a null pointer exception in the
> > > > > >>> > SequenceManagerHelper that I don't understand:
> > > > > >>> > java.lang.NullPointerException
> > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHelper.buildSequenceName(SequenceManagerHelper.java:48)
> > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHiLoImpl.getUniqueId(SequenceManagerHiLoImpl.java:49)
> > > > > >>> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.getUniqueId(PersistenceBrokerImpl.java:2258)
> > > > > >>> >          at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueId(DelegatingPersistenceBroker.java:242)
> > > > > >>> >          at gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
> > > > > >>> >          at gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationValue_.java:54)
> > > > > >>> >          at gov.lanl.COAS.Empty_.<init>(Empty_.java:31)
> > > > > >>> >
> > > > > >>> > I'm pretty sure that it is being called correctly from
> > > > > >>> > my code (which works fine in 0.9.7), but it is failing now.
> > > > > >>> >
> > > > > >>> > An unrelated warning in a different application is that
> > > > > >>> > OJB says I should use addLike() for using LIKE, but it
> > > > > >>> > seems to use the right code anyway.  Is this just a
> > > > > >>> > deprecation issue?  I don't see why it bothers to tell
> > > > > >>> > me this, if it can figure out what to do anyway.
> > > > > >>> >
> > > > > >>> > Thanks,
> > > > > >>> >
> > > > > >>> > Dave
> > > > > >>> >
> > > > > >>> >
> > > > > >>> > --
> > > > > >>> > To unsubscribe, e-mail:
> > > > > >>><ma...@jakarta.apache.org>
> > > > > >>> > For additional commands, e-mail:
> > > > > >>><ma...@jakarta.apache.org>
> > > > > >>> >
> > > > > >>> >
> > > > > >>> >


Re: %HOME% on Win32 (Re: ra_dav compression question)

Posted by Branko Čibej <br...@xbc.nu>.
Chris Hecker wrote:

>> You can already change your %APPDATA% setting to, say, a subdirectory
>> of your %HOME%. So you can get almost exactly what you want without
>> changing a single line of SVN code.
>
> This will destroy the universe, I'm fairly sure.  Windows is really
> bad about that kind of thing.  I've never tried it, however.  Have you?

Yes, I have. It works. And these days, Windows is really _good_ about
these things; only programs that don't conform to the Windows
conventions have problems, and that's what I want to avoid.

>> However, Subversion also _creates_ that directory; how does it know
>> which you prefer?
>
>
> Sorry, I wasn't clear.  I was saying that if it finds that directory
> is not there, it will check for %HOME%/.subversion as well.  That's
> all.  So, most of the time it will just use %APPDATA% for people who
> don't care, but if somebody does then they'll move the directory and
> svn will do the right thing. 

I see. So you create %HOME%/.subversion before running SVN for the first
time, and make sure %APPDATA%/Subversion doesn't exist, and from
that point on things should just work?

Hm.

> From looking at it briefly, it'll be very little code to make this
> change, only a few lines in one function
> (svn_config__user_config_path) and one line in the config_impl.h
> header to do it right. 

Oh, no. All of this logic should go into config_win.c, and nowhere else.
It's _different_ than on Unix, even with just the directory check.
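The lookup order under discussion can be written down as a tiny pure
function (an illustrative sketch only; Subversion's real implementation is
C, and every name below is invented): an existing %APPDATA%\Subversion
wins, %HOME%\.subversion is the fallback, and the default created on first
run stays under %APPDATA% as before.

```java
// Sketch of the proposed Windows config-dir lookup (illustrative only;
// Subversion's actual logic is C code, and these names are invented).
// Existence flags are passed in so the precedence is a pure function.
public class ConfigDirSketch {
    static String pickConfigDir(String appData, boolean appDataDirExists,
                                String home, boolean homeDirExists) {
        if (appDataDirExists)
            return appData + "\\Subversion";      // existing dir wins
        if (home != null && homeDirExists)
            return home + "\\.subversion";        // user-created fallback
        return appData + "\\Subversion";          // default: create as before
    }

    public static void main(String[] args) {
        // User pre-created %HOME%\.subversion; no %APPDATA%\Subversion yet:
        System.out.println(
            pickConfigDir("C:\\AppData", false, "C:\\Home", true));
    }
}
```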

>> That won't work, .lnk files are handled by the Explorer, not the
>> filesystem. However, on Win2k and above, you can make
>> %APPDATA%\Subversion a junction to anywhere you like -- again, not
>> having to change a single line of code in SVN.
>
> Except junctions aren't very well documented or supported

Yes, they are, if you look at the right place. Also, the "dir" command
will show junctions.

> and there are reports of filesystem weirdness when using them.

What kind? I've been using them for years, with no perceived weirdness.
The main thing is that they're supported on the filesystem level. The
drawback is that they're NTFS-only, but if you ask me, people who don't
use NTFS are in for trouble anyway.

> Again, I'm happy to write the 10 lines of code and test them and send
> a patch to the HEAD, I just want to know it will be considered before
> I do that.  If you're absolutely not interested I won't bother. 
> However, I think the feature is sound and this is the best of the
> options.

This discussion is similar to the one about whether the SVN command-line
client should print paths with \ or / as a separator. As always, my
first answer is to conform to the OS native conventions.

What you're proposing complicates the code, but it's just possible that
the complication isn't too big. So, let's see the patch first, and
decide afterwards.


-- 
Brane Čibej   <br...@xbc.nu>   http://www.xbc.nu/brane/


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [users@httpd] Error :usr/include/unistd.h:1058: error: expected ')' before '[' token

Posted by Eric Covener <co...@gmail.com>.
The wiki has AIX info for xlc and maybe even gcc -- but as a later
poster said, xlc is the path of least resistance.

On Fri, Nov 15, 2013 at 10:37 AM, Stormy <st...@stormy.ca> wrote:
> At 07:36 PM 11/15/2013 +0800, you wrote:
>>
>> Dear Paul,
>> Thank you for your reply.
>> Now I am trying to install (httpd 2.2.25) version  and we get the below
>> error. Kindly help us to resolve the problem.
>
>
> Sorry - I have little (very old) or no experience on AIX; anyone else?
>
> Paul
>
>
>> make
>> Making all in srclib
>> Making all in apr
>>         /bin/sh /GPFS/install/SAP/apache/httpd-2.2.25/srclib/apr/libtool
>> --silent --mode=compile gcc -g -O2   -DHAVE_CONFIG_H -U__STR__
>> -D_THREAD_SAFE   -I./include
>> -I/GPFS/install/SAP/apache/httpd-2.2.25/srclib/apr/include/arch/unix
>> -I./include/arch/unix
>> -I/GPFS/install/SAP/apache/httpd-2.2.25/srclib/apr/include/arch/unix
>> -I/GPFS/install/SAP/apache/httpd-2.2.25/srclib/apr/include  -o
>> file_io/unix/filedup.lo -c file_io/unix/filedup.c && touch
>> file_io/unix/filedup.lo
>> file_io/unix/filedup.c: In function 'file_dup':
>> file_io/unix/filedup.c:49: error: 'F_GETFD' undeclared (first use in this
>> function)
>> file_io/unix/filedup.c:49: error: (Each undeclared identifier is reported
>> only once
>> file_io/unix/filedup.c:49: error: for each function it appears in.)
>> file_io/unix/filedup.c:52: error: 'FD_CLOEXEC' undeclared (first use in
>> this function)
>> file_io/unix/filedup.c:53: error: 'F_SETFD' undeclared (first use in this
>> function)
>> make: 1254-004 The error code from the last command is 1.
>>
>> Stop.
>> make: 1254-004 The error code from the last command is 1.
>> Stop.
>> make: 1254-004 The error code from the last command is 1.
>>
>>
>> Thanks & Regards,
>> Baskaran.V
>>
>>
>>
>>
>>
>> On Friday, 15 November 2013 1:16 AM, Stormy <st...@stormy.ca> wrote:
>> At 06:49 AM 11/15/2013 +0800, you wrote:
>> >Dear All,
>> >
>> >I did not find the solution in google and archiving mail list .
>> >Hence I am posting my issue here.
>> >I am trying to install Apache (httpd-2.0.64)
>>                                 ^^^^^^^^^^^^^
>> <http://www.apache.org/dist/httpd/Announcement2.0.html> "The Apache
>> Software Foundation and the Apache HTTP Server Project are pleased to
>> announce the final 2.0 release version 2.0.65 of the Apache HTTP Server
>> ("Apache"). This version of Apache will be the last 2.0 bug and security
>> fix release, covering many but not all issues addressed in the stable 2.4
>> and legacy 2.2 released versions."
>>
>> Personally, I don't always chase the absolute latest version, but 2.2.x
>> (which I use on hundreds of servers) should be the "legacy" version of
>> choice (2.4 might allow you more forward flexibility.)
>>
>> Why 2.0?
>>
>> Best - Paul
>>
>>
>>
>>
>>
>>
>>
>
>
>



-- 
Eric Covener
covener@gmail.com

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: Bayes expiry not working?

Posted by Theo Van Dinter <fe...@kluge.net>.
On Fri, Apr 16, 2004 at 12:31:42AM +0200, Kai Schaetzl wrote:
> Correct, Theo?

That description looked right, yes.

-- 
Randomly Generated Tagline:
"Seat Belt unbuckled, door open, and the driver is probably inebriated..."
                      - Prof. Vaz

Re: Bayes expiry not working?

Posted by Kai Schaetzl <ma...@conactive.com>.
Theo Van Dinter wrote on Thu, 15 Apr 2004 15:45:01 -0400:

> Ah.  You have atimes in the future.  (age of tokens are based on
> difference from "newest" (aka latest) token).
>

What Theo means: the estimation pass will go down 12 hours from the newest 
atime in the magic summary (which in your case is in the future) and check 
how many tokens would expire (all tokens older than 12 hours = all except 4). 
Then it does the same going down 24, 48, and so on hours. This is reflected 
in the breakdown table you posted earlier.
Correct, Theo?
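That estimation pass is easy to sketch (illustrative Java, not
SpamAssassin's Perl code; all names are invented). Because ages are
computed relative to the newest atime, a few future-dated tokens make
every normal token look older than every cutoff:

```java
// Sketch of the expiry estimation described above (illustrative only,
// not SpamAssassin's code): token age is measured from the NEWEST atime,
// so future-dated atimes make every normal token look "old".
public class ExpiryEstimateSketch {
    // Count tokens older than ageSeconds relative to the newest atime.
    static int wouldExpire(long[] atimes, long newest, long ageSeconds) {
        int n = 0;
        for (long t : atimes)
            if (newest - t > ageSeconds) n++;
        return n;
    }

    public static void main(String[] args) {
        long now = 1_082_000_000L;           // "current" epoch seconds
        long future = now + 150L * 86400;    // an atime months in the future
        long[] atimes = { now, now - 3600, now - 7200, future };
        // The pass steps down 12h, 24h, 48h ... from the newest atime;
        // with a future-dated "newest", all 3 normal tokens expire at
        // every step.
        for (long hours : new long[] { 12, 24, 48 })
            System.out.println(hours + "h: " +
                wouldExpire(atimes, future, hours * 3600));
    }
}
```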


Kai

-- 

Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com
IE-Center: http://ie5.de & http://msie.winware.org




Re: Bayes expiry not working?

Posted by Nels Lindquist <nl...@maei.ca>.
On 15 Apr 2004 at 15:45, Theo Van Dinter wrote:

> On Thu, Apr 15, 2004 at 01:23:04PM -0600, Nels Lindquist wrote:
> > There are some bizarre entries as well; there were about 10 records 
> > with atimes of 0, and four entries with atimes in the future 
> > (September and October 2004).
> > 
> > So why is the expiry code deciding that all the atimes are old?
> 
> Ah.  You have atimes in the future.  (age of tokens are based on
> difference from "newest" (aka latest) token).

Well, the obvious next question is, "How do I fix it?" :-)

----
Nels Lindquist <*>
Information Systems Manager
Morningstar Air Express Inc.


Re: Bayes expiry not working?

Posted by Theo Van Dinter <fe...@kluge.net>.
On Thu, Apr 15, 2004 at 01:23:04PM -0600, Nels Lindquist wrote:
> There are some bizarre entries as well; there were about 10 records 
> with atimes of 0, and four entries with atimes in the future 
> (September and October 2004).
> 
> So why is the expiry code deciding that all the atimes are old?

Ah.  You have atimes in the future.  (age of tokens are based on
difference from "newest" (aka latest) token).

-- 
Randomly Generated Tagline:
Personally, I like to defiantly split my infinitives.  :-)
              -- Larry Wall in <19...@wall.org>

Re: [logging] LogConfigurationException

Posted by Andreas Probst <an...@gmx.net>.
Hi Achim,

I've already realized this. However, as I'd like to use the 
Swing or AWT UI, I'll stick to log4j.
Nevertheless, thanks.

Andreas


On 5 Jun 2003 at 22:05, Achim Felber wrote:

> Andreas,
> 
> there is a problem with the swingui TestRunner. The aforementioned
> class loader problem is probably to blame.
> I usually use the textui TestRunner (see below) in which case 
> your example works just fine.
> 
> Achim
> 
> ==========================================
>   public static void main(String args[])
>   {
>     LogFactory.getLog("test").debug("Seems to work ...!");
> //    junit.swingui.TestRunner.run(TestINIGet.class);
>     junit.textui.TestRunner.run(new TestSuite(TestINIGet.class));
>   }
> ==========================================
> 
> On Wed, Jun 04, 2003 at 08:03:27AM +0200, Andreas Probst wrote:
> > Thank you both Achim and Anthony for your replies.
> > 
> > I still can't get it to work. Achim, I tried your code - and it 
> > works. Then I commented the junit-call out of the main method - 
> > and it works.
> > 
> >     public static void main(String[] args)
> >     {
> >         //BasicConfigurator.configure();
> >         logger.debug("test");
> >         junit.swingui.TestRunner.run(AllTests.class);
> >     }
> > 
> > If I run the code above, the "test" gets logged as expected, but 
> > after that the exception below is thrown.
> > 
> > Is there a known issue with JUnit 3.7?
> > 
> > log4j: Finished configuring.
> > 04 Jun 2003 07:50:21,122 DEBUG AllTests: test
> > Exception in thread "main" java.lang.ExceptionInInitializerError
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >         at java.lang.reflect.Method.invoke(Method.java:324)
> >         at junit.runner.BaseTestRunner.getTest(BaseTestRunner.java:111)
> >         at junit.awtui.TestRunner.runSuite(TestRunner.java:455)
> >         at junit.awtui.TestRunner.start(TestRunner.java:536)
> >         at junit.awtui.TestRunner.main(TestRunner.java:382)
> >         at junit.awtui.TestRunner.run(TestRunner.java:387)
> >         at de...AllTests.main(AllTests.java:29)
> > Caused by: org.apache.commons.logging.LogConfigurationException:
> > org.apache.commons.logging.LogConfigurationException:
> > org.apache.commons.logging.LogConfigurationException: Class
> > org.apache.commons.logging.impl.Log4JLogger does not implement Log
> >         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:532)
> >         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:272)
> >         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:246)
> >         at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:395)
> >         at de...AllTests.<clinit>(AllTests.java:21)
> >         ... 10 more
> > Caused by: org.apache.commons.logging.LogConfigurationException:
> > org.apache.commons.logging.LogConfigurationException: Class
> > org.apache.commons.logging.impl.Log4JLogger does not implement Log
> >         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:416)
> >         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:525)
> >         ... 14 more
> > Caused by: org.apache.commons.logging.LogConfigurationException:
> > Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> >         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:412)
> >         ... 15 more
> > 
> > Andreas
> > 
> > On 2 Jun 2003 at 22:50, Achim Felber wrote:
> > 
> > > Andreas,
> > > 
> > > I think Anthony might be right; you could be using incompatible versions
> > > of Log4J and commons-logging. Below is an application that worked for me. 
> > > Make sure Log4J finds its configuration file. The debug switch can be quite 
> > > helpfull too. ;-)
> > > 
> > > Regards,
> > > Achim
> > > ===========================================================
> > > import org.apache.commons.logging.*;
> > > import org.apache.log4j.*;
> > > 
> > > public class LogTest
> > > {
> > >   public static void main(String[] args)
> > >   {
> > >     LogFactory.getLog("TestLog").debug("Seems to work ...!");
> > >   }
> > > }
> > > 
> > > // java -Dlog4j.debug=true -Dlog4j.configuration=file:/c:/data/sort4.properties LogTest
> > > 
> > > On Mon, Jun 02, 2003 at 01:58:27PM -0400, Anthony Eden wrote:
> > > > You can also put the log4j.properties file in a location which is known 
> > > > to Log4J.  Take a look at the Default Initialization Procedure section 
> > > > in the short manual ( http://jakarta.apache.org/log4j/docs/manual.html ) 
> > > > for information on how Log4J attempts to find the properties file.
> > > > 
> > > > As for the specific problem, make sure you are using both the current 
> > > > version of Log4J and the current version of Commons Logging.  Also, 
> > > > check your whole classpath for other copies of either Commons Logging or 
> > > > Log4J.
> > > > 
> > > > Sincerely,
> > > > Anthony Eden
> > > > 
> > > > Andreas Probst wrote:
> > > > >Hi Achim,
> > > > >
> > > > >thank you for your answer. You're right - I would have to change 
> > > > >the configuration only once. It's not what I expected, but OK, I 
> > > > >think I can live with it.
> > > > >
> > > > >However, putting an 
> > > > >BasicConfigurator.configure();
> > > > >into the main() method doesn't solve the current problem:
> > > > >
> > > > >Caused by: org.apache.commons.logging.LogConfigurationException: 
> > > > >org.apache.commons.logging.LogConfigurationException: 
> > > > >org.apache.commons.logging.LogConfigurationException: Class 
> > > > >org.apache.commons.logging.impl.Log4JLogger does not implement 
> > > > >Log
> > > > >
> > > > >Does anyone have a clue there?
> > > > >
> > > > >Regards
> > > > >
> > > > >Andreas
> > > 
> > > -- 
> > > Achim Felber
> > > e-mail: afelber@austin.rr.com
> > > 
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: commons-user-unsubscribe@jakarta.apache.org
> > > For additional commands, e-mail: commons-user-help@jakarta.apache.org
> > > 
> > 
> > 
> > 
> > 
> 
> -- 
> Achim Felber
> e-mail: afelber@austin.rr.com
> homepage: http://home.austin.rr.com/afelber/
> 
> 



Re: [logging] LogConfigurationException

Posted by Achim Felber <af...@austin.rr.com>.
Are there any opinions regarding the lack of a log method such as

  public void log(String msg, Object[] parameters, Throwable ex)
  {
    if (logger.isEnabled())
      logger.log(java.text.MessageFormat.format(msg, parameters), ex);
  }

This method would only create the string to log if the logger is
actually enabled. Also, it might make it easier to internationalize the
code since the strings to change may be easier to locate.

Any pros and cons to this approach?
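To make the lazy-formatting benefit concrete with something runnable (a
generic, self-contained sketch, not commons-logging's API; all names
below are invented):

```java
import java.text.MessageFormat;

// Self-contained illustration of the lazy-formatting idea above (names
// are made up): the message string is only built when the logger is
// actually enabled, so disabled log calls cost almost nothing.
public class LazyLogSketch {
    boolean enabled;   // stands in for a logger's "is level enabled" check
    int formats;       // counts how often we paid for MessageFormat
    String last;       // what would have been logged

    void log(String msg, Object[] parameters, Throwable ex) {
        if (enabled) {                 // skip formatting entirely when off
            formats++;
            last = MessageFormat.format(msg, parameters);
        }
    }

    public static void main(String[] args) {
        LazyLogSketch log = new LazyLogSketch();
        log.log("user {0} failed after {1} tries", new Object[]{"bob", 3}, null);
        System.out.println(log.formats);  // disabled: nothing was formatted
        log.enabled = true;
        log.log("user {0} failed after {1} tries", new Object[]{"bob", 3}, null);
        System.out.println(log.last);
    }
}
```

Callers still pay for building the `Object[]`, but that is usually far
cheaper than string concatenation or formatting.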

Achim
-- 
Achim Felber
e-mail: afelber@austin.rr.com

Re: [logging] LogConfigurationException

Posted by Achim Felber <af...@austin.rr.com>.
Andreas,

there is a problem with the swingui TestRunner. The aforementioned
class loader problem is probably to blame.
I usually use the textui TestRunner (see below) in which case 
your example works just fine.

Achim

==========================================
  public static void main(String args[])
  {
    LogFactory.getLog("test").debug("Seems to work ...!");
//    junit.swingui.TestRunner.run(TestINIGet.class);
    junit.textui.TestRunner.run(new TestSuite(TestINIGet.class));
  }
==========================================

On Wed, Jun 04, 2003 at 08:03:27AM +0200, Andreas Probst wrote:
> Thank you both Achim and Anthony for your replies.
> 
> I still can't get it to work. Achim, I tried your code - and it 
> works. Then I commented the junit-call out of the main method - 
> and it works.
> 
>     public static void main(String[] args)
>     {
>         //BasicConfigurator.configure();
>         logger.debug("test");
>         junit.swingui.TestRunner.run(AllTests.class);
>     }
> 
> If I run the code above, the "test" gets logged as expected, but 
> after that the exception below is thrown.
> 
> Is there a known issue with JUnit 3.7?
> 
> log4j: Finished configuring.
> 04 Jun 2003 07:50:21,122 DEBUG AllTests: test
> Exception in thread "main" java.lang.ExceptionInInitializerError
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:324)
>         at junit.runner.BaseTestRunner.getTest(BaseTestRunner.java:111)
>         at junit.awtui.TestRunner.runSuite(TestRunner.java:455)
>         at junit.awtui.TestRunner.start(TestRunner.java:536)
>         at junit.awtui.TestRunner.main(TestRunner.java:382)
>         at junit.awtui.TestRunner.run(TestRunner.java:387)
>         at de...AllTests.main(AllTests.java:29)
> Caused by: org.apache.commons.logging.LogConfigurationException:
> org.apache.commons.logging.LogConfigurationException:
> org.apache.commons.logging.LogConfigurationException: Class
> org.apache.commons.logging.impl.Log4JLogger does not implement Log
>         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:532)
>         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:272)
>         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:246)
>         at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:395)
>         at de...AllTests.<clinit>(AllTests.java:21)
>         ... 10 more
> Caused by: org.apache.commons.logging.LogConfigurationException:
> org.apache.commons.logging.LogConfigurationException: Class
> org.apache.commons.logging.impl.Log4JLogger does not implement Log
>         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:416)
>         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:525)
>         ... 14 more
> Caused by: org.apache.commons.logging.LogConfigurationException:
> Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
>         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:412)
>         ... 15 more
> 
> Andreas
> 
> On 2 Jun 2003 at 22:50, Achim Felber wrote:
> 
> > Andreas,
> > 
> > I think Anthony might be right; you could be using incompatible versions
> > of Log4J and commons-logging. Below is an application that worked for me. 
> > Make sure Log4J finds its configuration file. The debug switch can be quite 
> > helpfull too. ;-)
> > 
> > Regards,
> > Achim
> > ===========================================================
> > import org.apache.commons.logging.*;
> > import org.apache.log4j.*;
> > 
> > public class LogTest
> > {
> >   public static void main(String[] args)
> >   {
> >     LogFactory.getLog("TestLog").debug("Seems to work ...!");
> >   }
> > }
> > 
> > // java -Dlog4j.debug=true -Dlog4j.configuration=file:/c:/data/sort4.properties LogTest
> > 
> > On Mon, Jun 02, 2003 at 01:58:27PM -0400, Anthony Eden wrote:
> > > You can also put the log4j.properties file in a location which is known 
> > > to Log4J.  Take a look at the Default Initialization Procedure section 
> > > in the short manual ( http://jakarta.apache.org/log4j/docs/manual.html ) 
> > > for information on how Log4J attempts to find the properties file.
> > > 
> > > As for the specific problem, make sure you are using both the current 
> > > version of Log4J and the current version of Commons Logging.  Also, 
> > > check your whole classpath for other copies of either Commons Logging or 
> > > Log4J.
> > > 
> > > Sincerely,
> > > Anthony Eden
> > > 
> > > Andreas Probst wrote:
> > > >Hi Achim,
> > > >
> > > >thank you for your answer. You're right - I would have to change 
> > > >the configuration only once. It's not what I expected, but OK, I 
> > > >think I can live with it.
> > > >
> > > >However, putting an 
> > > >BasicConfigurator.configure();
> > > >into the main() method doesn't solve the current problem:
> > > >
> > > >Caused by: org.apache.commons.logging.LogConfigurationException: 
> > > >org.apache.commons.logging.LogConfigurationException: 
> > > >org.apache.commons.logging.LogConfigurationException: Class 
> > > >org.apache.commons.logging.impl.Log4JLogger does not implement 
> > > >Log
> > > >
> > > >Does anyone have a clue there?
> > > >
> > > >Regards
> > > >
> > > >Andreas
> > 
> > -- 
> > Achim Felber
> > e-mail: afelber@austin.rr.com
> > 
> > 
> 
> 
> 
> 

-- 
Achim Felber
e-mail: afelber@austin.rr.com
homepage: http://home.austin.rr.com/afelber/

Re: [logging] LogConfigurationException

Posted by Andreas Probst <an...@gmx.net>.
Hi Craig,

thank you very much for your comprehensive response. However, I 
did not manage to get it running with my test setup. I suppose 
it's because of the architecture of the graphical UIs in JUnit. 
I'll stick to log4j, at least within my tests.

Thanks again.

Andreas


On 4 Jun 2003 at 23:41, Craig R. McClanahan wrote:

> 
> 
> On Thu, 5 Jun 2003, Andreas Probst wrote:
> 
> > Date: Thu, 05 Jun 2003 07:09:48 +0200
> > From: Andreas Probst <an...@gmx.net>
> > Reply-To: Jakarta Commons Users List <co...@jakarta.apache.org>
> > To: Jakarta Commons Users List <co...@jakarta.apache.org>
> > Subject: Re: [logging] LogConfigurationException
> >
> > Having read the two articles
> >
> > http://www.qos.ch/logging/thinkAgain.html
> > https://secure.zdnet.com.au/builder/program/java/story/0,20000347
> > 79,20272367 ,00.htm
> >
> > I think it's better not to use Commons-Logging in my app and
> > certainly not within the test cases. Maybe I should use it in
> > components which could be reused...
> >
> > Craig, what do you think about the first article?
> 
> I think two things about this article, and then have some additional
> comments:
> 
> * Ceki Gulcu is an incredibly brilliant person, who (pretty
>   much) created Log4J -- a very useful logging implementation
>   that has some unique and powerful capabilities.  Those
>   capabilities have been created in direct response to requests
>   from users, in the best traditions of open source software.
> 
> * Ceki Gulcu, as the primary author of Log4J, justifiably
>   argues for its viability.  So anything he says about, say,
>   JDK 1.4 logging, needs to be understood in that context.
> 
> However, the primary focus of these articles (a comparison of Log4J versus
> JDK 1.4 logging) is pretty much irrelevant to the customers of most
> Jakarta Commons libraries.  Why?  Because the Jakarta Commons libraries
> that use commons-logging insulate applications using them from having to
> choose one logging implementation or the other.  NOTE CAREFULLY that you
> can easily use a Jakarta Commons package -- even one that uses logging --
> secure in the knowledge that using that Commons package does not force
> *you* to use the same logging implementation that the authors of that
> particular library happened to pick.
> 
> Permit me to spend a few paragraphs venting on this topic ... :-)
> 
> The fundamental issue that causes grief, here in the Commons world, has
> nothing to do with Ceki, or even with Log4J (or even any argument about
> whether Log4J or JDK 1.4 logging is technologically superior).  The real
> problem is that some people expect that commons-logging is something that
> it is not.
> 
> The really important issue:  The commons-logging package ***IS*** a facade
> around a variety of logging implementations, so that a library using it
> (say, commons-digester just as an example) does NOT dictate the logging
> implementation that an application using Digester has to use.  That is the
> one and only purpose for which it was created.  And the fact that the
> Digester developers made the choice to use C-L (disclaimer:  I'm one of
> those developers :-) was a positive step towards making that package
> reusable in more contexts, but not giving up the benefits that embedded
> logging statements can bring.
> 
> The commons-logging package ***IS NOT*** a logging implementation itself
> (other than the fact that it includes a fallback SimpleLog implementation
> which, in retrospect, might not have really been a good idea because of
> the confusion that has resulted).  It expects that the underlying
> application will have picked whatever logging implementation is
> appropriate, and done its own configuration of that logging environment,
> totally independently of commons-logging.
> 
> A clue that you get it -- you understand that changes in the underlying
> logging implementation automatically affect libraries that use
> commons-logging, without requiring *any* changes in those libraries.  The
> only thing C-L knows is the *name* of a "logger", not where the output of
> that logger is configured to go.
> 
> A clue that you don't get it -- you expect that o.a.c.logging.Log
> instances should themselves be configurable as independent entities (they
> are supposed to be *invisible* facades around the actual logging
> entities).  Or, you expect that commons-logging will provide an API to
> transparently configure, and reconfigure, the underlying logging
> implementation.  Or, that C-L should provide a mechanism to deal with
> application-level facades (in spite of the fact that C-L is already a
> facade, and goes to great lengths to hide itself).
> 
> That is not what commons-logging is for.
> 
> What commons-logging is for, is to allow a function library, such as a
> library you might choose from Jakarta Commons, to be able to use logging
> in its own implementation classes, *without* creating a prerequisite that
> the application using that library must use the same logging
> implementation that the library developers happened to like.
> 
> PLEASE DO use commons-logging in your applications where you don't want
> your application to be directly tied to a particular logging
> implementation.  As an example of this scenario, Struts (an MVC framework
> for web applications) uses C-L so that it does *not* force an application
> developer to choose whatever logging implementation that the Struts
> developers happen to like.
> 
> PLEASE DO NOT use commons-logging on the theory that you will be able to
> portably configure, for example, the "foo.bar" logger to be at DEBUG level
> of detail, while the "foo.baz" logger is at the WARN level.  That is
> outside the scope of commons-logging.  You are only going to end up
> frustrated.
> 
> >
> > Regards,
> >
> > Andreas
> >
> 
> Craig
> 
> PS:  The exception message described in the message below is *absolutely*
> and *positively* related to screwing up the class loader hierarchy of the
> test environment, and has nothing to do with commons-logging as such
> (unless they are using a version of C-L before 1.0.3, all of which still
> had some class loader issues).  Most likely, the tester has exposed the
> C-L classes in more than one class loader (a class "foo.bar" loaded from
> class loader A is not considered to be the same as class "foo.bar" loaded
> from class loader B -- even if the bytecodes happen to be the same) --
> that is pretty much guaranteed to cause fatal problems, no matter what
> particular classes you are trying to load.
> 
> PPS: Specific note to Andreas -- make sure that you are using Commons
> Logging 1.0.3, and also make sure that you are executing your tests in a
> separate JVM so that you don't get messed up by the CLASSPATH that was
> specified on your Ant execution.
> 
> >
> > On 4 Jun 2003 at 8:03, Andreas Probst wrote:
> >
> > > Thank you both Achim and Anthony for your replies.
> > >
> > > I still can't get it to work. Achim, I tried your code - and it
> > > works. Then I commented the junit-call out of the main method -
> > > and it works.
> > >
> > >     public static void main(String[] args)
> > >     {
> > >         //BasicConfigurator.configure();
> > >         logger.debug("test");
> > >         junit.swingui.TestRunner.run(AllTests.class);
> > >     }
> > >
> > > If I run the code above, the "test" gets logged as expected, but
> > > after that the exception below is thrown.
> > >
> > > Is there a known issue with JUnit 3.7?
> > >
> > > log4j: Finished configuring.
> > > 04 Jun 2003 07:50:21,122 DEBUG AllTests: test
> > > Exception in thread "main" java.lang.ExceptionInInitializerError
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > >         at java.lang.reflect.Method.invoke(Method.java:324)
> > >         at junit.runner.BaseTestRunner.getTest(BaseTestRunner.java:111)
> > >         at junit.awtui.TestRunner.runSuite(TestRunner.java:455)
> > >         at junit.awtui.TestRunner.start(TestRunner.java:536)
> > >         at junit.awtui.TestRunner.main(TestRunner.java:382)
> > >         at junit.awtui.TestRunner.run(TestRunner.java:387)
> > >         at de...AllTests.main(AllTests.java:29)
> > > Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:532)
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:272)
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:246)
> > >         at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:395)
> > >         at de...AllTests.<clinit>(AllTests.java:21)
> > >         ... 10 more
> > > Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:416)
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:525)
> > >         ... 14 more
> > > Caused by: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:412)
> > >         ... 15 more
> > >
> > > Andreas
> > >
> > > On 2 Jun 2003 at 22:50, Achim Felber wrote:
> > >
> > > > Andreas,
> > > >
> > > > I think Anthony might be right; you could be using incompatible versions
> > > > of Log4J and commons-logging. Below is an application that worked for me.
> > > > Make sure Log4J finds its configuration file. The debug switch can be quite
> > > > helpful too. ;-)
> > > >
> > > > Regards,
> > > > Achim
> > > > ===========================================================
> > > > import org.apache.commons.logging.*;
> > > > import org.apache.log4j.*;
> > > >
> > > > public class LogTest
> > > > {
> > > >   public static void main(String[] args)
> > > >   {
> > > >     LogFactory.getLog("TestLog").debug("Seems to work ...!");
> > > >   }
> > > > }
> > > >
> > > > // java -Dlog4j.debug=true -Dlog4j.configuration=file:/c:/data/sort4.properties LogTest
> > > >
> > > > On Mon, Jun 02, 2003 at 01:58:27PM -0400, Anthony Eden wrote:
> > > > > You can also put the log4j.properties file in a location which is known
> > > > > to Log4J.  Take a look at the Default Initialization Procedure section
> > > > > in the short manual ( http://jakarta.apache.org/log4j/docs/manual.html )
> > > > > for information on how Log4J attempts to find the properties file.
> > > > >
> > > > > As for the specific problem, make sure you are using both the current
> > > > > version of Log4J and the current version of Commons Logging.  Also,
> > > > > check your whole classpath for other copies of either Commons Logging or
> > > > > Log4J.
> > > > >
> > > > > Sincerely,
> > > > > Anthony Eden
> > > > >
> > > > > Andreas Probst wrote:
> > > > > >Hi Achim,
> > > > > >
> > > > > >thank you for your answer. You're right - I would have to change
> > > > > >the configuration only once. It's not what I expected, but OK, I
> > > > > >think I can live with it.
> > > > > >
> > > > > >However, putting an
> > > > > >BasicConfigurator.configure();
> > > > > >into the main() method doesn't solve the current problem:
> > > > > >
> > > > > >Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> > > > > >
> > > > > >Does anyone have a clue there?
> > > > > >
> > > > > >Regards
> > > > > >
> > > > > >Andreas
> > > >
> > > > --
> > > > Achim Felber
> > > > e-mail: afelber@austin.rr.com
> > > >
> > > >
> > >
> > >
> > >
> > >
> >
> >
> >
> >
> >
> 
> 



Resolved: Re: [logging] LogConfigurationException

Posted by Andreas Probst <an...@gmx.net>.
On 4 Jun 2003 at 23:41, Craig R. McClanahan wrote:

> PS:  The exception message described in the message below is *absolutely*
> and *positively* related to screwing up the class loader hierarchy of the
> test environment, and has nothing to do with commons-logging as such
> (unless they are using a version of C-L before 1.0.3, all of which still
> had some class loader issues).  Most likely, the tester has exposed the
> C-L classes in more than one class loader (a class "foo.bar" loaded from
> class loader A is not considered to be the same as class "foo.bar" loaded
> from class loader B -- even if the bytecodes happen to be the same) --
> that is pretty much guaranteed to cause fatal problems, no matter what
> particular classes you are trying to load.
> 
> PPS: Specific note to Andreas -- make sure that you are using Commons
> Logging 1.0.3, and also make sure that you are executing your tests in a
> separate JVM so that you don't get messed up by the CLASSPATH that was
> specified on your Ant execution.
> 

The problem is within the UI test runners of JUnit. They ship 
with a custom class loader, which causes the 
LogConfigurationException. Unfortunately, logging directly 
through Log4j doesn't work either.

Solution: disable "Reload classes every run", or start JUnit with 
the command line option -noloading before the name of the test suite.
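
For illustration, the command line form might look like this (a sketch only, assuming junit.jar from JUnit 3.x is on the classpath and AllTests is the suite class from the snippet quoted below):

    java junit.swingui.TestRunner -noloading AllTests

The -noloading option makes the runner use the normal system class loader instead of its reloading one, so commons-logging and Log4j only ever see a single copy of each class.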

Andreas
> >
> > On 4 Jun 2003 at 8:03, Andreas Probst wrote:
> >
> > > Thank you both Achim and Anthony for your replies.
> > >
> > > I still can't get it to work. Achim, I tried your code - and it
> > > works. Then I commented the junit-call out of the main method -
> > > and it works.
> > >
> > >     public static void main(String[] args)
> > >     {
> > >         //BasicConfigurator.configure();
> > >         logger.debug("test");
> > >         junit.swingui.TestRunner.run(AllTests.class);
> > >     }
> > >
> > > If I run the code above, the "test" gets logged as expected, but
> > > after that the exception below is thrown.
> > >
> > > Is there a known issue with JUnit 3.7?
> > >
> > > log4j: Finished configuring.
> > > 04 Jun 2003 07:50:21,122 DEBUG AllTests: test
> > > Exception in thread "main" java.lang.ExceptionInInitializerError
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> > >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> > >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> > >         at java.lang.reflect.Method.invoke(Method.java:324)
> > >         at junit.runner.BaseTestRunner.getTest(BaseTestRunner.java:111)
> > >         at junit.awtui.TestRunner.runSuite(TestRunner.java:455)
> > >         at junit.awtui.TestRunner.start(TestRunner.java:536)
> > >         at junit.awtui.TestRunner.main(TestRunner.java:382)
> > >         at junit.awtui.TestRunner.run(TestRunner.java:387)
> > >         at de...AllTests.main(AllTests.java:29)
> > > Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:532)
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:272)
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:246)
> > >         at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:395)
> > >         at de...AllTests.<clinit>(AllTests.java:21)
> > >         ... 10 more
> > > Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:416)
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:525)
> > >         ... 14 more
> > > Caused by: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> > >         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:412)
> > >         ... 15 more
> > >
> > > Andreas
> > >


Re: [logging] LogConfigurationException

Posted by "Craig R. McClanahan" <cr...@apache.org>.

On Thu, 5 Jun 2003, Andreas Probst wrote:

> Date: Thu, 05 Jun 2003 07:09:48 +0200
> From: Andreas Probst <an...@gmx.net>
> Reply-To: Jakarta Commons Users List <co...@jakarta.apache.org>
> To: Jakarta Commons Users List <co...@jakarta.apache.org>
> Subject: Re: [logging] LogConfigurationException
>
> Having read the two articles
>
> http://www.qos.ch/logging/thinkAgain.html
> https://secure.zdnet.com.au/builder/program/java/story/0,2000034779,20272367,00.htm
>
> I think it's better not to use Commons-Logging in my app and
> certainly not within the test cases. Maybe I should use it in
> components which could be reused...
>
> Craig, what do you think about the first article?

I think two things about this article, and then have some additional
comments:

* Ceki Gulcu is an incredibly brilliant person, who (pretty
  much) created Log4J -- a very useful logging implementation
  that has some unique and powerful capabilities.  Those
  capabilities have been created in direct response to requests
  from users, in the best traditions of open source software.

* Ceki Gulcu, as the primary author of Log4J, justifiably
  argues for its viability.  So anything he says about, say,
  JDK 1.4 logging, needs to be understood in that context.

However, the primary focus of these articles (a comparison of Log4J versus
JDK 1.4 logging) is pretty much irrelevant to the customers of most
Jakarta Commons libraries.  Why?  Because the Jakarta Commons libraries
that use commons-logging insulate applications using them from having to
choose one logging implementation or the other.  NOTE CAREFULLY that you
can easily use a Jakarta Commons package -- even one that uses logging --
secure in the knowledge that using that Commons package does not force
*you* to use the same logging implementation that the authors of that
particular library happened to pick.

Permit me to spend a few paragraphs venting on this topic ... :-)

The fundamental issue that causes grief, here in the Commons world, has
nothing to do with Ceki, or even with Log4J (or even any argument about
whether Log4J or JDK 1.4 logging is technologically superior).  The real
problem is that some people expect that commons-logging is something that
it is not.

The really important issue:  The commons-logging package ***IS*** a facade
around a variety of logging implementations, so that a library using it
(say, commons-digester just as an example) does NOT dictate the logging
implementation that an application using Digester has to use.  That is the
one and only purpose for which it was created.  And the fact that the
Digester developers made the choice to use C-L (disclaimer:  I'm one of
those developers :-) was a positive step towards making that package
reusable in more contexts, but not giving up the benefits that embedded
logging statements can bring.

The commons-logging package ***IS NOT*** a logging implementation itself
(other than the fact that it includes a fallback SimpleLog implementation
which, in retrospect, might not have really been a good idea because of
the confusion that has resulted).  It expects that the underlying
application will have picked whatever logging implementation is
appropriate, and done its own configuration of that logging environment,
totally independently of commons-logging.

A clue that you get it -- you understand that changes in the underlying
logging implementation automatically affect libraries that use
commons-logging, without requiring *any* changes in those libraries.  The
only thing C-L knows is the *name* of a "logger", not where the output of
that logger is configured to go.
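
What "C-L only knows the *name* of a logger" means can be sketched with a miniature facade. This is a local illustration in plain JDK code -- the Log/LogFactory names mirror commons-logging, but the body is a deliberate simplification, not the real library's discovery logic:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Miniature facade in the spirit of commons-logging. Callers code against
// the Log interface and never see which implementation sits behind it.
interface Log {
    void debug(String msg);
    void warn(String msg);
}

class LogFactory {
    // The factory hands out an adapter; here it is hard-wired to JDK
    // logging. The real commons-logging discovers the implementation
    // (Log4J, JDK 1.4 logging, SimpleLog) at runtime -- the caller never
    // knows which one it got.
    static Log getLog(String name) {
        final Logger jdk = Logger.getLogger(name);
        return new Log() {
            public void debug(String msg) { jdk.log(Level.FINE, msg); }
            public void warn(String msg)  { jdk.log(Level.WARNING, msg); }
        };
    }
}

public class FacadeSketch {
    public static void main(String[] args) {
        // Library code only ever names a logger; where the output goes is
        // decided entirely by the underlying implementation's own config.
        Log log = LogFactory.getLog("org.example.demo");
        log.warn("routed through whatever implementation the facade picked");
    }
}
```

Swap the body of getLog() for a Log4J-backed adapter and every caller keeps working unchanged -- which is the whole point of the facade.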

A clue that you don't get it -- you expect that o.a.c.logging.Log
instances should themselves be configurable as independent entities (they
are supposed to be *invisible* facades around the actual logging
entities).  Or, you expect that commons-logging will provide an API to
transparently configure, and reconfigure, the underlying logging
implementation.  Or, that C-L should provide a mechanism to deal with
application-level facades (in spite of the fact that C-L is already a
facade, and goes to great lengths to hide itself).

That is not what commons-logging is for.

What commons-logging is for, is to allow a function library, such as a
library you might choose from Jakarta Commons, to be able to use logging
in its own implementation classes, *without* creating a prerequisite that
the application using that library must use the same logging
implementation that the library developers happened to like.

PLEASE DO use commons-logging in your applications where you don't want
your application to be directly tied to a particular logging
implementation.  As an example of this scenario, Struts (an MVC framework
for web applications) uses C-L so that it does *not* force an application
developer to choose whatever logging implementation that the Struts
developers happen to like.

PLEASE DO NOT use commons-logging on the theory that you will be able to
portably configure, for example, the "foo.bar" logger to be at DEBUG level
of detail, while the "foo.baz" logger is at the WARN level.  That is
outside the scope of commons-logging.  You are only going to end up
frustrated.

>
> Regards,
>
> Andreas
>

Craig

PS:  The exception message described in the message below is *absolutely*
and *positively* related to screwing up the class loader hierarchy of the
test environment, and has nothing to do with commons-logging as such
(unless they are using a version of C-L before 1.0.3, all of which still
had some class loader issues).  Most likely, the tester has exposed the
C-L classes in more than one class loader (a class "foo.bar" loaded from
class loader A is not considered to be the same as class "foo.bar" loaded
from class loader B -- even if the bytecodes happen to be the same) --
that is pretty much guaranteed to cause fatal problems, no matter what
particular classes you are trying to load.
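
Craig's class loader point (same bytes, two loaders, two incompatible classes) can be demonstrated with nothing but the JDK. This is a hypothetical ClassLoaderDemo that compiles a throwaway Foo class at runtime, so it requires a JDK (javac available), not just a JRE:

```java
import java.io.File;
import java.io.FileWriter;
import java.net.URL;
import java.net.URLClassLoader;
import javax.tools.ToolProvider;

public class ClassLoaderDemo {

    // Compile a trivial class "Foo" into a temp directory; return that directory.
    static File compileFoo() throws Exception {
        File dir = new File(System.getProperty("java.io.tmpdir"), "cl-demo");
        dir.mkdirs();
        File src = new File(dir, "Foo.java");
        try (FileWriter w = new FileWriter(src)) {
            w.write("public class Foo {}");
        }
        ToolProvider.getSystemJavaCompiler().run(null, null, null, src.getPath());
        return dir;
    }

    // Load "Foo" through two sibling class loaders that see the same .class bytes.
    public static Class<?>[] loadTwice() throws Exception {
        URL[] urls = { compileFoo().toURI().toURL() };
        Class<?> fooA = new URLClassLoader(urls, null).loadClass("Foo");
        Class<?> fooB = new URLClassLoader(urls, null).loadClass("Foo");
        return new Class<?>[] { fooA, fooB };
    }

    public static void main(String[] args) throws Exception {
        Class<?>[] foo = loadTwice();
        // Same name, same bytecodes -- still two distinct, incompatible classes.
        System.out.println("same name:  " + foo[0].getName().equals(foo[1].getName()));
        System.out.println("same class: " + (foo[0] == foo[1]));
    }
}
```

This is exactly how a second copy of commons-logging on another branch of the class loader tree produces "Class ... Log4JLogger does not implement Log": the Log interface the factory sees and the one Log4JLogger implements are different classes despite the identical name.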

PPS: Specific note to Andreas -- make sure that you are using Commons
Logging 1.0.3, and also make sure that you are executing your tests in a
separate JVM so that you don't get messed up by the CLASSPATH that was
specified on your Ant execution.

>
> On 4 Jun 2003 at 8:03, Andreas Probst wrote:
>
> > Thank you both Achim and Anthony for your replies.
> >
> > I still can't get it to work. Achim, I tried your code - and it
> > works. Then I commented the junit-call out of the main method -
> > and it works.
> >
> >     public static void main(String[] args)
> >     {
> >         //BasicConfigurator.configure();
> >         logger.debug("test");
> >         junit.swingui.TestRunner.run(AllTests.class);
> >     }
> >
> > If I run the code above, the "test" gets logged as expected, but
> > after that the exception below is thrown.
> >
> > Is there a known issue with JUnit 3.7?
> >
> > log4j: Finished configuring.
> > 04 Jun 2003 07:50:21,122 DEBUG AllTests: test
> > Exception in thread "main" java.lang.ExceptionInInitializerError
> >         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >         at java.lang.reflect.Method.invoke(Method.java:324)
> >         at junit.runner.BaseTestRunner.getTest(BaseTestRunner.java:111)
> >         at junit.awtui.TestRunner.runSuite(TestRunner.java:455)
> >         at junit.awtui.TestRunner.start(TestRunner.java:536)
> >         at junit.awtui.TestRunner.main(TestRunner.java:382)
> >         at junit.awtui.TestRunner.run(TestRunner.java:387)
> >         at de...AllTests.main(AllTests.java:29)
> > Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> >         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:532)
> >         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:272)
> >         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:246)
> >         at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:395)
> >         at de...AllTests.<clinit>(AllTests.java:21)
> >         ... 10 more
> > Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> >         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:416)
> >         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:525)
> >         ... 14 more
> > Caused by: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> >         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:412)
> >         ... 15 more
> >
> > Andreas
> >
> > On 2 Jun 2003 at 22:50, Achim Felber wrote:
> >
> > > Andreas,
> > >
> > > I think Anthony might be right; you could be using incompatible versions
> > > of Log4J and commons-logging. Below is an application that worked for me.
> > > Make sure Log4J finds its configuration file. The debug switch can be quite
> > > helpful too. ;-)
> > >
> > > Regards,
> > > Achim
> > > ===========================================================
> > > import org.apache.commons.logging.*;
> > > import org.apache.log4j.*;
> > >
> > > public class LogTest
> > > {
> > >   public static void main(String[] args)
> > >   {
> > >     LogFactory.getLog("TestLog").debug("Seems to work ...!");
> > >   }
> > > }
> > >
> > > // java -Dlog4j.debug=true -Dlog4j.configuration=file:/c:/data/sort4.properties LogTest
> > >
> > > On Mon, Jun 02, 2003 at 01:58:27PM -0400, Anthony Eden wrote:
> > > > You can also put the log4j.properties file in a location which is known
> > > > to Log4J.  Take a look at the Default Initialization Procedure section
> > > > in the short manual ( http://jakarta.apache.org/log4j/docs/manual.html )
> > > > for information on how Log4J attempts to find the properties file.
> > > >
> > > > As for the specific problem, make sure you are using both the current
> > > > version of Log4J and the current version of Commons Logging.  Also,
> > > > check your whole classpath for other copies of either Commons Logging or
> > > > Log4J.
> > > >
> > > > Sincerely,
> > > > Anthony Eden
> > > >
> > > > Andreas Probst wrote:
> > > > >Hi Achim,
> > > > >
> > > > >thank you for your answer. You're right - I would have to change
> > > > >the configuration only once. It's not what I expected, but OK, I
> > > > >think I can live with it.
> > > > >
> > > > >However, putting an
> > > > >BasicConfigurator.configure();
> > > > >into the main() method doesn't solve the current problem:
> > > > >
> > > > >Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
> > > > >
> > > > >Does anyone have a clue there?
> > > > >
> > > > >Regards
> > > > >
> > > > >Andreas
> > >
> > > --
> > > Achim Felber
> > > e-mail: afelber@austin.rr.com
> > >
> > >
> >
> >
> >
> >
>
>
>
>
>

Re: [logging] LogConfigurationException

Posted by Andreas Probst <an...@gmx.net>.
Having read the two articles 

http://www.qos.ch/logging/thinkAgain.html
https://secure.zdnet.com.au/builder/program/java/story/0,2000034779,20272367,00.htm

I think it's better not to use Commons-Logging in my app and 
certainly not within the test cases. Maybe I should use it in 
components which could be reused...

Craig, what do you think about the first article?

Regards,

Andreas


On 4 Jun 2003 at 8:03, Andreas Probst wrote:

> Thank you both Achim and Anthony for your replies.
> 
> I still can't get it to work. Achim, I tried your code - and it 
> works. Then I commented the junit-call out of the main method - 
> and it works.
> 
>     public static void main(String[] args)
>     {
>         //BasicConfigurator.configure();
>         logger.debug("test");
>         junit.swingui.TestRunner.run(AllTests.class);
>     }
> 
> If I run the code above, the "test" gets logged as expected, but 
> after that the exception below is thrown.
> 
> Is there a known issue with JUnit 3.7?
> 
> log4j: Finished configuring.
> 04 Jun 2003 07:50:21,122 DEBUG AllTests: test
> Exception in thread "main" java.lang.ExceptionInInitializerError
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>         at java.lang.reflect.Method.invoke(Method.java:324)
>         at junit.runner.BaseTestRunner.getTest(BaseTestRunner.java:111)
>         at junit.awtui.TestRunner.runSuite(TestRunner.java:455)
>         at junit.awtui.TestRunner.start(TestRunner.java:536)
>         at junit.awtui.TestRunner.main(TestRunner.java:382)
>         at junit.awtui.TestRunner.run(TestRunner.java:387)
>         at de...AllTests.main(AllTests.java:29)
> Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
>         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:532)
>         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:272)
>         at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:246)
>         at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:395)
>         at de...AllTests.<clinit>(AllTests.java:21)
>         ... 10 more
> Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
>         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:416)
>         at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:525)
>         ... 14 more
> Caused by: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
>         at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:412)
>         ... 15 more
> 
> Andreas
> 
> On 2 Jun 2003 at 22:50, Achim Felber wrote:
> 
> > Andreas,
> > 
> > I think Anthony might be right; you could be using incompatible versions
> > of Log4J and commons-logging. Below is an application that worked for me. 
> > Make sure Log4J finds its configuration file. The debug switch can be quite 
> > helpfull too. ;-)
> > 
> > Regards,
> > Achim
> > ===========================================================
> > import org.apache.commons.logging.*;
> > import org.apache.log4j.*;
> > 
> > public class LogTest
> > {
> >   public static void main(String[] args)
> >   {
> >     LogFactory.getLog("TestLog").debug("Seems to work ...!");
> >   }
> > }
> > 
> > // java -Dlog4j.debug=true -Dlog4j.configuration=file:/c:/data/sort4.properties LogTest
> > 
> > On Mon, Jun 02, 2003 at 01:58:27PM -0400, Anthony Eden wrote:
> > > You can also put the log4j.properties file in a location which is known 
> > > to Log4J.  Take a look at the Default Initialization Procedure section 
> > > in the short manual ( http://jakarta.apache.org/log4j/docs/manual.html ) 
> > > for information on how Log4J attempts to find the properties file.
> > > 
> > > As for the specific problem, make sure you are using both the current 
> > > version of Log4J and the current version of Commons Logging.  Also, 
> > > check your whole classpath for other copies of either Commons Logging or 
> > > Log4J.
> > > 
> > > Sincerely,
> > > Anthony Eden
> > > 
> > > Andreas Probst wrote:
> > > >Hi Achim,
> > > >
> > > >thank you for your answer. You're right - I would have to change 
> > > >the configuration only once. It's not what I expected, but OK, I 
> > > >think I can live with it.
> > > >
> > > >However, putting an 
> > > >BasicConfigurator.configure();
> > > >into the main() method doesn't solve the current problem:
> > > >
> > > >Caused by: org.apache.commons.logging.LogConfigurationException: 
> > > >org.apache.commons.logging.LogConfigurationException: 
> > > >org.apache.commons.logging.LogConfigurationException: Class 
> > > >org.apache.commons.logging.impl.Log4JLogger does not implement 
> > > >Log
> > > >
> > > >Does anyone have a clue there?
> > > >
> > > >Regards
> > > >
> > > >Andreas
> > 
> > -- 
> > Achim Felber
> > e-mail: afelber@austin.rr.com
> > 
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: commons-user-unsubscribe@jakarta.apache.org
> > For additional commands, e-mail: commons-user-help@jakarta.apache.org
> > 
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: commons-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: commons-user-help@jakarta.apache.org
> 



Re: [logging] LogConfigurationException

Posted by Andreas Probst <an...@gmx.net>.
Thank you both Achim and Anthony for your replies.

I still can't get it to work. Achim, I tried your code - and it 
works. Then I commented the junit-call out of the main method - 
and it works.

    public static void main(String[] args)
    {
        //BasicConfigurator.configure();
        logger.debug("test");
        junit.swingui.TestRunner.run(AllTests.class);
    }

If I run the code above, the "test" gets logged as expected, but 
after that the exception below is thrown.

Is there a known issue with JUnit 3.7?

log4j: Finished configuring.
04 Jun 2003 07:50:21,122 DEBUG AllTests: test
Exception in thread "main" java.lang.ExceptionInInitializerError
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:324)
        at junit.runner.BaseTestRunner.getTest(BaseTestRunner.java:111)
        at junit.awtui.TestRunner.runSuite(TestRunner.java:455)
        at junit.awtui.TestRunner.start(TestRunner.java:536)
        at junit.awtui.TestRunner.main(TestRunner.java:382)
        at junit.awtui.TestRunner.run(TestRunner.java:387)
        at de...AllTests.main(AllTests.java:29)
Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:532)
        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:272)
        at org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFactoryImpl.java:246)
        at org.apache.commons.logging.LogFactory.getLog(LogFactory.java:395)
        at de...AllTests.<clinit>(AllTests.java:21)
        ... 10 more
Caused by: org.apache.commons.logging.LogConfigurationException: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
        at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:416)
        at org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFactoryImpl.java:525)
        ... 14 more
Caused by: org.apache.commons.logging.LogConfigurationException: Class org.apache.commons.logging.impl.Log4JLogger does not implement Log
        at org.apache.commons.logging.impl.LogFactoryImpl.getLogConstructor(LogFactoryImpl.java:412)
        ... 15 more

Andreas
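
The trace above is the classic symptom of commons-logging's Log interface being
loaded twice: JUnit 3.x's graphical runners reload test classes through their
own classloader, so the Log4JLogger instantiated under the test classloader can
end up implementing a different copy of Log than the one LogFactoryImpl checks
against. One way to confirm a duplicate is to print where each class was loaded
from; the sketch below is illustrative only (the class name and usage are mine,
not from this thread):

```java
// Diagnostic sketch: report which code source a class was loaded from.
// Printing this for org.apache.commons.logging.Log both inside and
// outside the JUnit runner shows whether two copies of the interface
// are in play.
public class WhereLoaded {

    static String locationOf(Class<?> c) {
        java.security.CodeSource cs = c.getProtectionDomain().getCodeSource();
        // JDK bootstrap classes report no code source
        return (cs == null || cs.getLocation() == null)
                ? "<bootstrap>"
                : cs.getLocation().toString();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical usage; requires commons-logging on the classpath:
        // System.out.println(locationOf(
        //         Class.forName("org.apache.commons.logging.Log")));
        System.out.println(locationOf(WhereLoaded.class));
    }
}
```

If two different jar locations show up inside and outside the runner, the usual
remedies (for later JUnit 3.x releases) are starting the runner with the
-noloading switch, or listing org.apache.* in JUnit's excluded.properties so
the logging classes are not reloaded.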

On 2 Jun 2003 at 22:50, Achim Felber wrote:

> Andreas,
> 
> I think Anthony might be right; you could be using incompatible versions
> of Log4J and commons-logging. Below is an application that worked for me. 
> Make sure Log4J finds its configuration file. The debug switch can be quite 
> helpful too. ;-)
> 
> Regards,
> Achim
> ===========================================================
> import org.apache.commons.logging.*;
> import org.apache.log4j.*;
> 
> public class LogTest
> {
>   public static void main(String[] args)
>   {
>     LogFactory.getLog("TestLog").debug("Seems to work ...!");
>   }
> }
> 
> // java -Dlog4j.debug=true -Dlog4j.configuration=file:/c:/data/sort4.properties LogTest
> 
> On Mon, Jun 02, 2003 at 01:58:27PM -0400, Anthony Eden wrote:
> > You can also put the log4j.properties file in a location which is known 
> > to Log4J.  Take a look at the Default Initialization Procedure section 
> > in the short manual ( http://jakarta.apache.org/log4j/docs/manual.html ) 
> > for information on how Log4J attempts to find the properties file.
> > 
> > As for the specific problem, make sure you are using both the current 
> > version of Log4J and the current version of Commons Logging.  Also, 
> > check your whole classpath for other copies of either Commons Logging or 
> > Log4J.
> > 
> > Sincerely,
> > Anthony Eden
> > 
> > Andreas Probst wrote:
> > >Hi Achim,
> > >
> > >thank you for your answer. You're right - I would have to change 
> > >the configuration only once. It's not what I expected, but OK, I 
> > >think I can live with it.
> > >
> > >However, putting an 
> > >BasicConfigurator.configure();
> > >into the main() method doesn't solve the current problem:
> > >
> > >Caused by: org.apache.commons.logging.LogConfigurationException: 
> > >org.apache.commons.logging.LogConfigurationException: 
> > >org.apache.commons.logging.LogConfigurationException: Class 
> > >org.apache.commons.logging.impl.Log4JLogger does not implement 
> > >Log
> > >
> > >Does anyone have a clue there?
> > >
> > >Regards
> > >
> > >Andreas
> 
> -- 
> Achim Felber
> e-mail: afelber@austin.rr.com
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: commons-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: commons-user-help@jakarta.apache.org
> 



Re: [logging] LogConfigurationException

Posted by Achim Felber <af...@austin.rr.com>.
Andreas,

I think Anthony might be right; you could be using incompatible versions
of Log4J and commons-logging. Below is an application that worked for me. 
Make sure Log4J finds its configuration file. The debug switch can be quite 
helpful too. ;-)

Regards,
Achim
===========================================================
import org.apache.commons.logging.*;
import org.apache.log4j.*;

public class LogTest
{
  public static void main(String[] args)
  {
    LogFactory.getLog("TestLog").debug("Seems to work ...!");
  }
}

// java -Dlog4j.debug=true -Dlog4j.configuration=file:/c:/data/sort4.properties LogTest

On Mon, Jun 02, 2003 at 01:58:27PM -0400, Anthony Eden wrote:
> You can also put the log4j.properties file in a location which is known 
> to Log4J.  Take a look at the Default Initialization Procedure section 
> in the short manual ( http://jakarta.apache.org/log4j/docs/manual.html ) 
> for information on how Log4J attempts to find the properties file.
> 
> As for the specific problem, make sure you are using both the current 
> version of Log4J and the current version of Commons Logging.  Also, 
> check your whole classpath for other copies of either Commons Logging or 
> Log4J.
> 
> Sincerely,
> Anthony Eden
> 
> Andreas Probst wrote:
> >Hi Achim,
> >
> >thank you for your answer. You're right - I would have to change 
> >the configuration only once. It's not what I expected, but OK, I 
> >think I can live with it.
> >
> >However, putting an 
> >BasicConfigurator.configure();
> >into the main() method doesn't solve the current problem:
> >
> >Caused by: org.apache.commons.logging.LogConfigurationException: 
> >org.apache.commons.logging.LogConfigurationException: 
> >org.apache.commons.logging.LogConfigurationException: Class 
> >org.apache.commons.logging.impl.Log4JLogger does not implement 
> >Log
> >
> >Does anyone have a clue there?
> >
> >Regards
> >
> >Andreas

-- 
Achim Felber
e-mail: afelber@austin.rr.com

Re: [logging] LogConfigurationException

Posted by Anthony Eden <me...@anthonyeden.com>.
You can also put the log4j.properties file in a location which is known 
to Log4J.  Take a look at the Default Initialization Procedure section 
in the short manual ( http://jakarta.apache.org/log4j/docs/manual.html ) 
for information on how Log4J attempts to find the properties file.

As for the specific problem, make sure you are using both the current 
version of Log4J and the current version of Commons Logging.  Also, 
check your whole classpath for other copies of either Commons Logging or 
Log4J.

Sincerely,
Anthony Eden

Andreas Probst wrote:
> Hi Achim,
> 
> thank you for your answer. You're right - I would have to change 
> the configuration only once. It's not what I expected, but OK, I 
> think I can live with it.
> 
> However, putting an 
> BasicConfigurator.configure();
> into the main() method doesn't solve the current problem:
> 
> Caused by: org.apache.commons.logging.LogConfigurationException: 
> org.apache.commons.logging.LogConfigurationException: 
> org.apache.commons.logging.LogConfigurationException: Class 
> org.apache.commons.logging.impl.Log4JLogger does not implement 
> Log
> 
> Does anyone have a clue there?
> 
> Regards
> 
> Andreas
> 
> 
> On 1 Jun 2003 at 18:26, Achim Felber wrote:
> 
> 
>>Andreas,
>>
>>I think you need to configure Log4J yourself. At the very least 
>>you need to tell it where to find the log4j.properties file for
>>instance by setting the log4j.configuration system property.
>>
>>I could be wrong but, I think commons-logging is supposed to 
>>only provide a generic interface for the actual logging calls.
>>The configuration of the underlying logging package doesn't seem
>>to be within the scope of commons-logging. So, when you change
>>the actual logger, for instance convert from Log4J to the JDK1.4 logger 
>>you don't have to change every class, only the one which configures the
>>logger. Your actual application code stays the same.
>>
>>Hope this helps,
>>Achim
>>
>>On Sun, Jun 01, 2003 at 06:42:27PM +0200, Andreas Probst wrote:
>>
>>>Hi all,
>>>
>>>I'm trying to use commons-logging together with log4j.
>>>
>>>Do I have to initialise log4j, i.e. load the log4j.properties? I 
>>>don't think so. If I have to configure it myself, I'm tied to 
>>>log4j. Then I could use it directly.
>>>
>>>At the moment I get 
>>>
>>>Caused by: org.apache.commons.logging.LogConfigurationException: 
>>>org.apache.commons.logging.LogConfigurationException: 
>>>org.apache.commons.logging.LogConfigurationException: Class 
>>>org.apache.commons.logging.impl.Log4JLogger does not implement Log
>>>        at 
>>>org.apache.commons.logging.impl.LogFactoryImpl.newInstance(LogFac
>>>toryImpl.java:532)
>>>        at 
>>>org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFac
>>>toryImpl.java:272)
>>>        at 
>>>org.apache.commons.logging.impl.LogFactoryImpl.getInstance(LogFac
>>>toryImpl.java:246)
>>>        at 
>>>org.apache.commons.logging.LogFactory.getLog(LogFactory.java:395)
>>>
>>>I put the following code into my class:
>>>
>>>import org.apache.commons.logging.Log;
>>>import org.apache.commons.logging.LogFactory;
>>>static Log logger = LogFactory.getLog(AllTests.class); 
>>>
>>>Isn't this enough?
>>>
>>>From reading the source code I think log4j.jar has to be in 
>>>classpath. Right?
>>>
>>>Thanks in advance for your help
>>>
>>>Andreas
>>>
>>>
>>>
>>
>>
> 
> 
> 
> 


Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Upayavira wrote, On 28/06/2003 17.00:

> Nicola Ken, 
> 
>>Sorry Jeff, but I don't have the time or energy to delve into this
>>discussion further. I'm getting a bit tired of it too. Don't get me
>>wrong, it's not about you, it's just that sometimes one loses interest
>>in some things.
> 
> FWIW, I'm pleased that Jeff is prepared to go along with these discussions - I think 
> our original discussions only went so far. We got it down to one pass, and were pretty 
> happy - we didn't really engage further with what the real consequences of that were, 
> and what we potentially lost. And I think because we didn't do this, we haven't brought 
> the rest of the Forrest community along with us. But now it is happening, which can 
> only be good.

Between the two of us, things could only go so far. Jeff has finally 
brought the stuff into "the real world (TM)" and highlighted things we 
did not think about.

I'm happy that you're here to work with him on this now.
What you have described is what I meant to say, too. :-)

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)



Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Upayavira <uv...@upaya.co.uk>.
Nicola Ken, 

> Sorry Jeff, but I don't have the time or energy to delve into this
> discussion further. I'm getting a bit tired of it too. Don't get me
> wrong, it's not about you, it's just that sometimes one loses interest
> in some things.

FWIW, I'm pleased that Jeff is prepared to go along with these discussions - I think 
our original discussions only went so far. We got it down to one pass, and were pretty 
happy - we didn't really engage further with what the real consequences of that were, 
and what we potentially lost. And I think because we didn't do this, we haven't brought 
the rest of the Forrest community along with us. But now it is happening, which can 
only be good.

> First of all, all I want is speed and less memory usage. At least the
> same speed we are now getting with the new CLI. If any alternative
> scheme can be devised to get to comparable speed and possibly memory
> usage, I'm *completely* fine with it. IIUC, what came out of the
> initial "new CLI" discussion is that a single pass can be regarded as
> both technically and conceptually better.

I agree entirely.

> Secondly, it seems to me that you are mixing conceptual decisions with
> what are in fact just implementation "details". Things you have
> pointed out, like the fixed gatherer position for example, are just an
> artifact of the initial implementation, not a thorough and important
> design decision, and thus still have to be improved upon and tested in
> the real world (for us it's Forrest). 

Exactly. And that is the point we are now coming to - we can move the link gathering 
stage where ever we like without slowing down the CLI. In fact, having an explicit one 
will speed things up, as we'll get rid of its use on cocoon: protocol pipelines.

> Finally, the new CLI is a WIP, so I applaud your effort in getting it
> better, so that it does not throw out the baby (link view) with the
> water (3 pass generation). I'm trying to see that in this process also
> the new features of the CLI (one pass gathering) are not thrown out
> themselves in the process of saving the baby ;-)

With the idea of two link gathering transformers, both of which can be placed 
anywhere in a pipeline, one which extracts hrefs and xlinks (as does the current link 
gatherer) and one which consumes a links namespace (which allows complete 
control over your followed links, just like the links view), I think we've got the best of 
both worlds. An empty bath with a baby in it :-)

Regards, Upayavira
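
The first of those two transformers, the one that just hunts for href, src and
xlink attributes, can be pictured as an ordinary SAX handler. The sketch below
is an illustrative reconstruction, not the actual LinkGatheringTransformer code
(a real Cocoon transformer would also forward the events downstream):

```java
import java.util.ArrayList;
import java.util.List;

import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;

// Gathers the attribute values a crawler would follow: href (which also
// covers xlink:href, since the attribute's local name is still "href")
// and src. This sketch only collects; it does not forward events.
public class LinkGatherer extends DefaultHandler {

    final List<String> links = new ArrayList<String>();

    @Override
    public void startElement(String uri, String localName, String qName,
                             Attributes atts) {
        for (int i = 0; i < atts.getLength(); i++) {
            String name = atts.getLocalName(i);
            if ("href".equals(name) || "src".equals(name)) {
                links.add(atts.getValue(i));
            }
        }
    }
}
```

Fed to a namespace-aware SAXParser, parsing a small document with one href and
one src attribute leaves both values in the links list.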


Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Jeff Turner wrote, On 28/06/2003 3.59:
...
> I hope I've convinced you :)  Certainly for simpler needs, hardcoding
> a LinkGathererTransformer is fine, but in general (and I hope where
> Forrest is going) we need the full power of a link view.

Sorry Jeff, but I don't have the time or energy to delve into this 
discussion further. I'm getting a bit tired of it too.
Don't get me wrong, it's not about you, it's just that sometimes one 
loses interest in some things.

So please excuse me if I don't reply to your points and just 
present MHO.

First of all, all I want is speed and less memory usage. At least the 
same speed we are now getting with the new CLI. If any alternative 
scheme can be devised to get to comparable speed and possibly memory 
usage, I'm *completely* fine with it.
IIUC, what came out of the initial "new CLI" discussion is that a single 
pass can be regarded as both technically and conceptually better.

Secondly, it seems to me that you are mixing conceptual decisions with 
what are in fact just implementation "details".
Things you have pointed out, like the fixed gatherer position for example, 
are just an artifact of the initial implementation, not a thorough and 
important design decision, and thus still have to be improved upon and 
tested in the real world (for us it's Forrest).
This is the sole reason why I ask you to read that thread. It explains 
the design decisions, and can help you in not necessarily 
re-investigating stuff that has already been fruitfully discussed.

Finally, the new CLI is a WIP, so I applaud your effort in getting it 
better, so that it does not throw out the baby (link view) with the 
water (3 pass generation).
I'm trying to see that in this process also the new features of the CLI 
(one pass gathering) are not thrown out themselves in the process of 
saving the baby ;-)

Ciao :-)

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)



Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Upayavira <uv...@upaya.co.uk>.
On 1 Jul 2003 at 14:47, Vadim Gritsenko wrote:

> Jeff Turner wrote:
> 
> >I'm not very familiar with the code; is there some cost in keeping
> >the two-pass CLI alive, in the faint hope that caching comes to its
> >rescue one day?
> >
> 
> Guys,
> 
> Before you implement some approach here... Let me suggest something.
> 
> Right now sitemap implementation automatically adds link gatherer to
> the pipeline when it is invoked by CLI. This link gatherer is in fact
> is "hard-coded links view". I suggest to replace this "hard-coded
> links view" a.k.a link gatherer with the "real" links view, BUT attach
> it as a tee to a main pipeline instead of running it as a pipeline by
> itself. As a result, links view "baby" will be used, two-pass "water"
> will be drained, and sitemap syntax will stay the same. Moreover, the
> links view will be still accessible from the outside, meaning that you
> can spider the site using out-of-the-process spiders.
> 
> Example:
> Given the pipeline:
>   G --> T1 (label="content") --> T2 --> S,
> 
> And the links view:
>   from-label="content" --> T3 --> LinkSerializer,
> 
> The pipeline built for the CLI request should be:
>   G --> T1 --> Tee --> T2 --> S --> OutputStream
>                  \
>                    --> LinkSerializer --> NullOutputStream
>                            \
>                              --> List of links in environment
> 
> In one request, you will get:
>  * Regular output of the pipeline which will go to the destination
>  Source * List of links in the environment which is what link gatherer
>  was made for

Splendid. I think that is exactly what I would want to do. We'd then have single(ish) 
pass generation with the benefits of link view. And if you just feed directly from the 
label into a serializer, it'll be pretty much the same in terms of performance as the 
LinkGatherer that we have now.

I would need help implementing this. Are you able to explain how?

There's a lot of pipeline building there that I wouldn't yet know how to do (but I'm 
willing to give it a go with guidance).

If we're to use my current approach, we'd add a different serializer at the end of the 
second sub-pipe, which would take the links and put them into a specific List in the 
ObjectModel. In fact, we could create a LinkGatheringOutputStream that'd be handed 
to the LinkSerializer to do that. That would leave most of the complexity simply in 
building the pipeline.

Can you guarantee that cocoon.process() will not complete until both sub-pipelines 
have completed their work?

I'll take a bit of a look into the pipeline building code (if I can find it) to see what I can 
work out.

This approach excites me. With help, I'd like to see if I can make it happen.

Regards, Upayavira
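
Vadim's tee amounts to a SAX splitter that forwards every event to two
consumers. Here is a minimal sketch in plain SAX terms; a Cocoon XMLConsumer
carries more contract than ContentHandler, and only the most common callbacks
are shown, so treat this as the idea rather than an implementation:

```java
import org.xml.sax.Attributes;
import org.xml.sax.ContentHandler;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.DefaultHandler;

// Forwards each SAX event to two consumers: the "main" branch
// (T2 --> S) and the "links" branch (LinkSerializer). A complete tee
// would override every ContentHandler method the same way.
public class SaxTee extends DefaultHandler {

    private final ContentHandler main;
    private final ContentHandler links;

    public SaxTee(ContentHandler main, ContentHandler links) {
        this.main = main;
        this.links = links;
    }

    @Override
    public void startDocument() throws SAXException {
        main.startDocument();
        links.startDocument();
    }

    @Override
    public void endDocument() throws SAXException {
        main.endDocument();
        links.endDocument();
    }

    @Override
    public void startElement(String uri, String localName, String qName,
                             Attributes atts) throws SAXException {
        main.startElement(uri, localName, qName, atts);
        links.startElement(uri, localName, qName, atts);
    }

    @Override
    public void endElement(String uri, String localName, String qName)
            throws SAXException {
        main.endElement(uri, localName, qName);
        links.endElement(uri, localName, qName);
    }

    @Override
    public void characters(char[] ch, int start, int length)
            throws SAXException {
        main.characters(ch, start, length);
        links.characters(ch, start, length);
    }
}
```

Because both branches are driven by the same parse, the main output and the
gathered links are produced in a single pass, which is the point of the
proposal.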




Re: Aspect-based pipelines and link view ( Re: Link view goodness)

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Upayavira wrote, On 30/06/2003 18.18:
> Guys,
> 
> 
>>The link stuff is a cross-cutting concern. This thread has IMHO shown
>>how aspects can be easily added to the sitemap, and effectively used.
>>Let's see...
...
>>><map:resource name="gather-links" from-position="content">
>>>  <!-- Any required link munging -->
>>>  <map:transformer type="gather-links"/>
>>></map:resource>
...
>>Seems like simply adding this capability to resources is nice. We
>>could similarly make a link-view that uses the same transformer and a
>>serializer. In this way it could also be compatible with the 3pass
>>method. Hmmm...
>>
>>>Ie, a Resource inserted in each pipeline after the 'content' label.
>>>Rather AOP'ish.
>>
>>Yup. As link gathering is a cross-cutting concern, it also makes sense
>>conceptually.
> 
> So you're saying, with a resource that has a 'from-position' attribute, that specifies 
> after which label it should be inserted? That makes sense. So you only have to have 
> the resource once per sitemap, rather than having to insert it into every pipeline. 

Exactly.

As you say, it automatically gets inserted in each place where I would 
have to put it manually.

So it's not part of the single pipelines, but it's a common "aspect" of 
the system that gets applied.

For example, if I wanted to log all method exits, I could put by hand a 
log() call in each method. Or I could factor out this aspect and specify 
with a rule where it applies, like saying:

  foreach(method.return){
   log()
  }

The important thing here is how I can tell the system about where to 
apply that code. It's important that it can be explained as a common 
rule, or else I'll be just moving the single log calls out of the 
methods, with no gain.

Here, linking has to be applied to a *group* of pipelines, in a specific 
*part* of them.
IE: "for xml pipelines, where the content is done, insert this transformer".

> But - what if the pipeline itself needs modifying to expose links from within PDFs for 
> example. The LinkGatheringTransformer I have coded has two modes, one where it 
> just hunts for href, src and xlink attributes, and the other that searches for attributes 
> in the http://apache.org/cocoon/link-gatherer/1.0 namespace (probably to be used 
> with a 'link' prefix). This latter kind is required for gathering links that don't conform to 
> the href, src or xlink conventions. Just auto-inserting a link gatherer wouldn't work in 
> this case.

It has to be possible to define when one or the other applies.
Examples of the two usages?

...
>>Now we say: "when the view is triggered, start at a label"
>>After it could be:  "when the view is triggered, start at position"
>>Instead we need: "when the position is met, check if it has to be
>>triggered".
>>
>>Here is an example that uses this "inverted" AOPish system for views.
>>
>>The following adds two aspects:
>>  - an aspect gets called from every content position and gathers
>>  links. - the other one gets called from every content position. If
>>  the 
>>request has a cocoon-view=links, then the links are serialized.
>>
>><map:aspects>
>>  <map:aspect type="from-label" test="content">
>>    <!-- Any required link munging -->
>>    <map:transformer type="gather-links"/>
>>  </map:aspect>
>>  <map:aspect type="from-label" test="content">
>>     <map:action type="request-param">
>>       <map:param name="cocoon-view" value="link">
>>       <map:serializer type="links"/>
>>     </map:action>
>>  </map:aspect>
>></map:aspects>
>>
>>This would make it very easy to add security-based checks, logging, or
>>any other stuff.
>>
>><map:aspects>
>>  <map:aspect type="pipeline" test="start">
>>    <map:action type="check-security"/>
>>  </map:aspect>
>>  <map:aspect type="pipeline" test="all">
>>    <map:transformer type="logger"/>
>>  </map:aspect>
>>  <map:aspect type="error" test="all">
>>    <map:action type="notify-admin"/>
>>  </map:aspect>
>></map:aspects>
>>
>>What do others think?
...
> 
> I'm afraid you left me completely behind there. I've not really yet understood what 
> AOP is, and your ideas go far further than my Cocoon implementation skills currently 
> allow.

It's quite easy once you get past the words...

> I'd quite like to find something that can be implemented reasonably short term, and 
> then explore these more far-reaching ideas as time passes (and as the size and 
> capacity of my brain increases).
> 
> Are you guys interested for the time being in a LinkGatheringTransformer as 
> described above? Or is there something not too far away that we can do now to 
> gather links?

  - Upayavira sanity check System -
  - please wait... -
  - ... -

  - control NicolaKen ...
  - in progress...

  *** ALERT ALERT ***
  *** Highly volatile thoughts ***
  *** ALERT ALERT ***

Ok, I got the message ;-)

Thanks for bringing me back to earth, I have the ?slight? ;-) tendency 
of flying high.

Let's see what we need to resolve for this case.

We said that the current gatherer has the problem of being fixed in place.
We have seen that a transformer placed in the right place can be more 
configurable.

So, the reasonable solution that comes from this is that if the 
user-defined pipeline is caching links, those are used.
If not, Cocoon inserts a gatherer in the right position.

This mixes ease of use with configurability right now.

But then we need a more definitive solution.
Without calling them aspects, let's call them inserts for now.
They are rules that make Cocoon insert resources or pipelines according 
to particular rules.

For example, let's say that we want Cocoon to add a 
LinkGatheringTransformer at the end of each pipeline.

   <map:insert type="index" location="last">
     <map:transformer type="gather-links"/>
   </map:insert>

Let's say we want to add it at the beginning instead:

   <map:insert type="index" location="first">
     <map:transformer type="gather-links"/>
   </map:insert>

If we want to add after the label content:

   <map:insert type="label" location="content">
     <map:transformer type="gather-links"/>
   </map:insert>

What here is missing is the declaration of the inserts.
Like when calling transformers I have to define the Transformer type, 
here I have to define the Inserter type.

   <map:components>
     <map:inserters>
       <map:inserter name="label"
                     class="org.apache.cocoon.inserters.LabelInserter"/>
       <map:inserter name="index"
                     class="org.apache.cocoon.inserters.IndexInserter"/>
     </map:inserters>
   </map:components>

Where an Inserter is a component that gets a Pipeline and a series of 
Pipeline Components and inserts them in the pipeline as the location 
indicates.

   public void insert(Pipeline p,
                      PipelineComponent[] components,
                      String location){
     ...

   }

The sitemap would call all inserters on all pipelines prior to starting.

This though needs additions to our contracts. What about making a 
special pipeline that can be aspected?

   <map:pipes default="aspected">
     <map:pipe name="aspected" 
src="org.apache.cocoon.components.pipeline.impl.AspectedProcessingPipeline">
       <param name="label:content" value="call-resource:link-gatherer"/>
       <param name="index:first"   value="call-resource:link-gatherer"/>
     </map:pipe>
   </map:pipes>

Bleah :-P

Let's start with the automatic-or-Transformer thing for now, then 
eventually evolve out of it.
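
Treating a pipeline as an ordered list of components makes the Inserter
contract above concrete. All names below are hypothetical, following the
sketch in this mail rather than any actual Cocoon API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical model: a pipeline is an ordered list of component names.
// An IndexInserter places extra components at "first" or "last",
// matching the <map:insert type="index" location="..."> examples.
public class IndexInserter {

    public void insert(List<String> pipeline, List<String> components,
                       String location) {
        if ("first".equals(location)) {
            pipeline.addAll(0, components);
        } else if ("last".equals(location)) {
            pipeline.addAll(components);
        } else {
            throw new IllegalArgumentException("unknown location: " + location);
        }
    }

    public static void main(String[] args) {
        List<String> pipeline = new ArrayList<String>(
                Arrays.asList("generator", "t1", "serializer"));
        new IndexInserter().insert(pipeline,
                Arrays.asList("gather-links"), "last");
        System.out.println(pipeline);
    }
}
```

A label-based Inserter would work the same way, except that it searches the
list for the component carrying the given label instead of using a fixed index.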

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)



Re: Aspect-based pipelines and link view ( Re: Link view goodness)

Posted by Upayavira <uv...@upaya.co.uk>.
Guys,

> The link stuff is a cross-cutting concern. This thread has IMHO shown
> how aspects can be easily added to the sitemap, and effectively used.
> Let's see...

> >>>- We're abusing the name 'transformer', since nothing is
> >>>transformed.
> >>> If we're really going to go this way, let's define a new sitemap
> >>> element, <map:link-gatherer/>.
> >>
> >>There are transformers that do not transform, it's not unusual,
> > 
> > I can't think of any others?
> 
> some OTOMH, maybe not 100% correct:
> 
> LogTransformer
> XMLFormTransformer
> WriteDOMSessionTransformer
> SourceWritingTransformer

Yup. Now, I've got an (untested) LinkGatheringTransformer ready and compiling. It 
shouldn't be much work to test it and get it going.

> > <snip links to stuff I mostly missed - thanks>
> > 
> >>"
> >>So basically we are adding a contract to the sitemap, by saying that
> >>each sitemap implementation has to provide a list of links if
> >>requested to (as seen above). "
> >>
> >>As you state, a Transformer does not feel right. In fact, a sitemap
> >>has now a new contract that it has to give links. The question is:
> >>how can it be made more versatile? Who can we tell the pipeline
> >>where we want the link gathering to occur?
> >>
> >>What about a named pipeline that is inserted by the link gatherer
> >>where it gets the links? What about using a spacial label to
> >>indicate where to gather links?

What is a named pipeline? How would the link gatherer (or rather the bean) 'insert a 
named pipeline'?

> > Hmm.. interesting.  Perhaps we just need to augment Resources a bit:
> > 
> > <map:resource name="gather-links" from-position="content">
> >   <!-- Any required link munging -->
> >   <map:transformer type="gather-links"/>
> > </map:resource>
> 
> Cool, you have put my words in code, adding that last bit that makes
> them worthwhile :-) This really looks like some sort of final solution,
> intriguing.

So how does this work? I don't get it. You're specifying a resource, but  presumably 
you're going to still have to insert it somehow?

> Hmmm, it also has to do with the "named pipelines" thread, or the
> pipeline==reusable_component one that Stefano had started.
> 
> Seems like simply adding this capability to resources is nice. We
> could similarly make a link-view that uses the same transformer and a
> serializer. In this way it could also be compatible with the 3pass
> method. Hmmm...
> 
> > Ie, a Resource inserted in each pipeline after the 'content' label.
> > Rather AOP'ish.
> 
> Yup. As link gathering is a cross-cutting concern, it also makes sense
> conceptually.

So you're saying, with a resource that has a 'from-position' attribute, that specifies 
after which label it should be inserted? That makes sense. So you only have to have 
the resource once per sitemap, rather than having to insert it into every pipeline. 

But - what if the pipeline itself needs modifying to expose links from within PDFs for 
example. The LinkGatheringTransformer I have coded has two modes, one where it 
just hunts for href, src and xlink attributes, and the other that searches for attributes 
in the http://apache.org/cocoon/link-gatherer/1.0 namespace (probably to be used 
with a 'link' prefix). This latter kind is required for gathering links that don't conform to 
the href, src or xlink conventions. Just auto-inserting a link gatherer wouldn't work in 
this case.
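As an aside, the first of those two modes can be illustrated at the plain
SAX level. The filter below is only a sketch of the attribute-hunting
idea, not the actual LinkGatheringTransformer (which would be built on
Cocoon's transformer classes); it records conventional href/src/xlink
attributes as well as anything in the link-gatherer namespace quoted
above, while passing all events through unchanged:

```java
import java.util.ArrayList;
import java.util.List;
import org.xml.sax.Attributes;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.XMLFilterImpl;

// Gathers link-like attribute values while forwarding all SAX events.
public class LinkGatheringFilter extends XMLFilterImpl {
    private static final String XLINK_NS = "http://www.w3.org/1999/xlink";
    private static final String LINK_NS = "http://apache.org/cocoon/link-gatherer/1.0";

    private final List<String> links = new ArrayList<String>();

    public List<String> getLinks() { return links; }

    public void startElement(String uri, String localName, String qName,
                             Attributes atts) throws SAXException {
        for (int i = 0; i < atts.getLength(); i++) {
            String aUri = atts.getURI(i);
            String aLocal = atts.getLocalName(i);
            // Mode one: conventional link attributes (href, src, xlink:*)
            boolean conventional = "href".equals(aLocal) || "src".equals(aLocal)
                    || XLINK_NS.equals(aUri);
            // Mode two: any attribute in the link-gatherer namespace
            if (conventional || LINK_NS.equals(aUri)) {
                links.add(atts.getValue(i));
            }
        }
        super.startElement(uri, localName, qName, atts); // pass-through
    }
}
```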

> The thing here is how it's called by the sitemap engine. There is no
> explicit call in the pipelines, but instead a "from-position"
> attribute. It could easily have also a serializer, and in this way it
> would terminate all pipelines... but it's like a view in this case...
> 
> But a view then would become an AOPish resource with a serializer
> called only on certain conditions. Let's call them aspects instead of
> resources.
> 
> So:
> - <pipelines> are called per request one after the other till the
> sitemap exits - <resources> are sitemap snippets called by the
> pipelines - <views> are exit points that get called at a particular
> label
>    (effectively a hard-wired AOP feature) by the sitemap
> 
> Pipelines and resources are effectively the same thing now that there
> is the cocoon protocol.
> 
> What remains is the <views> part, that introduces pipeline-stage
> metadata, as a label. It's an aspect that gets called when that
> particular condition is met (I won't use AOP terminology that I
> personally don't yet like)
> 
> So we can generalize it, and add configurability to the view mechanism
> to specify other conditions.
> 
>    <map:view name="content" from-label="content">
>      <map:serialize type="xml"/>
>    </map:view>
> 
> becomes:
> 
>    <map:view name="content" type="from-label"
>                             test="content">
>      <map:serialize type="xml"/>
>    </map:view>
> 
> This makes it possible to make a different position where to start
> from...
> 
> What can also be made configurable is *when*, in which condition, it's
> triggered, but the logic has to be inverted.
> 
> Now we say: "when the view is triggered, start at a label"
> After it could be:  "when the view is triggered, start at position"
> Instead we need: "when the position is met, check if it has to be
> triggered".
> 
> Here is an example that uses this "inverted" AOPish system for views.
> 
> The following adds two aspects:
>   - an aspect gets called from every content position and gathers links.
>   - the other one gets called from every content position. If the
> request has a cocoon-view=links, then the links are serialized.
> 
> <map:aspects>
>   <map:aspect type="from-label" test="content">
>     <!-- Any required link munging -->
>     <map:transformer type="gather-links"/>
>   </map:aspect>
>   <map:aspect type="from-label" test="content">
>      <map:action type="request-param">
>        <map:param name="cocoon-view" value="link">
>        <map:serializer type="links"/>
>      </map:action>
>   </map:aspect>
> </map:aspects>
> 
> This would make it very easy to add security-based checks, logging, or
> any other stuff.
> 
> <map:aspects>
>   <map:aspect type="pipeline" test="start">
>     <map:action type="check-security"/>
>   </map:aspect>
>   <map:aspect type="pipeline" test="all">
>     <map:transformer type="logger"/>
>   </map:aspect>
>   <map:aspect type="error" test="all">
>     <map:action type="notify-admin"/>
>   </map:aspect>
> </map:aspects>
> 
> What do others think?
> 
> Is it already possible to do this today with other components and
> skillful pipeline writing? For doing it at the beginning or at the end
> of a request it's possible to have an entry-point pipeline that has pre
> and post processing, but to add stuff *inside* other pipelines? I
> think it cannot be done today.

I'm afraid you left me completely behind there. I've not really yet understood what 
AOP is, and your ideas go far further than my Cocoon implementation skills currently 
allow.

I'd quite like to find something that can be implemented reasonably short term, and 
then explore these more far-reaching ideas as time passes (and as the size and 
capacity of my brain increases).

Are you guys interested for the time being in a LinkGatheringTransformer as 
described above? Or is there something not too far away that we can do now to 
gather links?

Regards, Upayavira


Aspect-based pipelines and link view ( Re: Link view goodness (Re: residuals of MIME type bug ?) )

Posted by Nicola Ken Barozzi <ni...@apache.org>.
The link stuff is a cross-cutting concern. This thread has IMHO shown 
how aspects can be easily added to the sitemap, and effectively used.
Let's see...

Jeff Turner wrote, On 29/06/2003 14.48:

> On Sun, Jun 29, 2003 at 11:34:01AM +0200, Nicola Ken Barozzi wrote:
> 
>>Jeff Turner wrote, On 29/06/2003 8.03:
> 
> ....
>>>- We're abusing the name 'transformer', since nothing is transformed.
>>> If we're really going to go this way, let's define a new sitemap
>>> element, <map:link-gatherer/>.
>>
>>There are transformers that do not transform, it's not unusual,
> 
> I can't think of any others?

some OTOMH, maybe not 100% correct:

LogTransformer
XMLFormTransformer
WriteDOMSessionTransformer
SourceWritingTransformer

> <snip links to stuff I mostly missed - thanks>
> 
>>"
>>So basically we are adding a contract to the sitemap, by saying that
>>each sitemap implementation has to provide a list of links if requested
>>to (as seen above).
>>"
>>
>>As you state, a Transformer does not feel right. In fact, a sitemap has 
>>now a new contract that it has to give links. The question is: how can 
>>it be made more versatile? How can we tell the pipeline where we want 
>>the link gathering to occur?
>>
>>What about a named pipeline that is inserted by the link gatherer where 
>>it gets the links? What about using a special label to indicate where to 
>>gather links?
> 
> Hmm.. interesting.  Perhaps we just need to augment Resources a bit:
> 
> <map:resource name="gather-links" from-position="content">
>   <!-- Any required link munging -->
>   <map:transformer type="gather-links"/>
> </map:resource>

Cool, you have put my words in code, adding that last bit that makes 
them worthwhile :-)
This really looks like some sort of final solution, intriguing.

Hmmm, it also has to do with the "named pipelines" thread, or the 
pipeline==reusable_component one that Stefano had started.

Seems like simply adding this capability to resources is nice. We could 
similarly make a link-view that uses the same transformer and a 
serializer. In this way it could also be compatible with the 3pass 
method. Hmmm...

> Ie, a Resource inserted in each pipeline after the 'content' label.
> Rather AOP'ish.

Yup. As link gathering is a cross-cutting concern, it also makes sense 
conceptually.

The thing here is how it's called by the sitemap engine. There is no 
explicit call in the pipelines, but instead a "from-position" attribute. 
It could easily have also a serializer, and in this way it would 
terminate all pipelines... but it's like a view in this case...

But a view then would become an AOPish resource with a serializer called 
only on certain conditions. Let's call them aspects instead of resources.

So:
- <pipelines> are called per request one after the other till the 
sitemap exits
- <resources> are sitemap snippets called by the pipelines
- <views> are exit points that get called at a particular label
   (effectively a hard-wired AOP feature) by the sitemap

Pipelines and resources are effectively the same thing now that there is 
the cocoon protocol.

What remains is the <views> part, that introduces pipeline-stage 
metadata, as a label. It's an aspect that gets called when that 
particular condition is met (I won't use AOP terminology that I 
personally don't yet like)

So we can generalize it, and add configurability to the view mechanism 
to specify other conditions.

   <map:view name="content" from-label="content">
     <map:serialize type="xml"/>
   </map:view>

becomes:

   <map:view name="content" type="from-label"
                            test="content">
     <map:serialize type="xml"/>
   </map:view>

This makes it possible to make a different position where to start from...

What can also be made configurable is *when*, in which condition, it's 
triggered, but the logic has to be inverted.

Now we say: "when the view is triggered, start at a label"
After it could be:  "when the view is triggered, start at position"
Instead we need: "when the position is met, check if it has to be 
triggered".

Here is an example that uses this "inverted" AOPish system for views.

The following adds two aspects:
  - an aspect gets called from every content position and gathers links.
  - the other one gets called from every content position. If the 
request has a cocoon-view=links, then the links are serialized.

<map:aspects>
  <map:aspect type="from-label" test="content">
    <!-- Any required link munging -->
    <map:transformer type="gather-links"/>
  </map:aspect>
  <map:aspect type="from-label" test="content">
     <map:action type="request-param">
       <map:param name="cocoon-view" value="link">
       <map:serializer type="links"/>
     </map:action>
  </map:aspect>
</map:aspects>

This would make it very easy to add security-based checks, logging, or 
any other stuff.

<map:aspects>
  <map:aspect type="pipeline" test="start">
    <map:action type="check-security"/>
  </map:aspect>
  <map:aspect type="pipeline" test="all">
    <map:transformer type="logger"/>
  </map:aspect>
  <map:aspect type="error" test="all">
    <map:action type="notify-admin"/>
  </map:aspect>
</map:aspects>

What do others think?

Is it already possible to do this today with other components and 
skillful pipeline writing? For doing it at the beginning or at the end 
of a request it's possible to have an entry-point pipeline that has pre 
and post processing, but to add stuff *inside* other pipelines? I think 
it cannot be done today.

>>Just food for thought.
> 
> Tasty..

I really appreciate it, and how you manage to work with me even when I'm 
a bit touchy. Thanks :-)

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------



Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Jeff Turner <je...@apache.org>.
On Sun, Jun 29, 2003 at 11:34:01AM +0200, Nicola Ken Barozzi wrote:
> Jeff Turner wrote, On 29/06/2003 8.03:
...
> >- We're abusing the name 'transformer', since nothing is transformed.
> >  If we're really going to go this way, let's define a new sitemap
> >  element, <map:link-gatherer/>.
> 
> There are transformers that do not transform, it's not unusual,

I can't think of any others?

<snip links to stuff I mostly missed - thanks>

> "
> So basically we are adding a contract to the sitemap, by saying that
> each sitemap implementation has to provide a list of links if requested
> to (as seen above).
> "
> 
> As you state, a Transformer does not feel right. In fact, a sitemap has 
> now a new contract that it has to give links. The question is: how can 
> it be made more versatile? How can we tell the pipeline where we want 
> the link gathering to occur?
> 
> What about a named pipeline that is inserted by the link gatherer where 
> it gets the links? What about using a special label to indicate where to 
> gather links?

Hmm.. interesting.  Perhaps we just need to augment Resources a bit:

<map:resource name="gather-links" from-position="content">
  <!-- Any required link munging -->
  <map:transformer type="gather-links"/>
</map:resource>

Ie, a Resource inserted in each pipeline after the 'content' label.
Rather AOP'ish.

> Just food for thought.

Tasty..

--Jeff

> -- 
> Nicola Ken Barozzi                   nicolaken@apache.org
>             - verba volant, scripta manent -
>    (discussions get forgotten, just code remains)
> ---------------------------------------------------------------------

Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Nicola Ken Barozzi <ni...@apache.org>.

Jeff Turner wrote, On 29/06/2003 8.03:
...
> I still have the feeling that a link-gatherer transformer is mixing
> concerns a bit, and that two-pass is conceptually nicer:
> 
> - We're abusing the name 'transformer', since nothing is transformed.
>   If we're really going to go this way, let's define a new sitemap
>   element, <map:link-gatherer/>.

There are transformers that do not transform, it's not unusual, 
although, since the sitemap has a new contract on links (see at the 
bottom), it might make sense.

> - Link gathering is irrelevant for online situations, so we pay some
>   performance penalty having a link-gatherer transformer.  This
>   illustrates why I think it mixes concerns.

Exactly.

> - It's easy to forget to define a link-gatherer transformer for new
>   pipelines.  Link-view is cross-cutting and doesn't have this
>   problem.

Again, exactly.

> I'm not very familiar with the code; is there some cost in keeping the
> two-pass CLI alive, in the faint hope that caching comes to its rescue
> one day?

Actually it was three-pass.

http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=104013686220328&w=2

>>Thanks for engaging with me on this - I appreciate it.
> 
> 
> Thank _you_; an improved CLI will make Forrest significantly more
> usable.

For your pleasure, and of interested parties, the previous threads:

http://marc.theaimsgroup.com/?t=102725710300001&r=1&w=2
http://marc.theaimsgroup.com/?t=104013701500006&r=1&w=2
http://marc.theaimsgroup.com/?t=104609314900002&r=1&w=2
http://marc.theaimsgroup.com/?t=104887033400005&r=1&w=2

And a couple of mails:

http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=104610949203967&w=2
http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=104679840022563&w=2
http://marc.theaimsgroup.com/?l=xml-cocoon-dev&m=104687731531754&w=2

The last mail in particular explains the current new-CLI method:

"
So basically we are adding a contract to the sitemap, by saying that
each sitemap implementation has to provide a list of links if requested
to (as seen above).
"

As you state, a Transformer does not feel right. In fact, a sitemap has 
now a new contract that it has to give links. The question is: how can 
it be made more versatile? How can we tell the pipeline where we want 
the link gathering to occur?

What about a named pipeline that is inserted by the link gatherer where 
it gets the links? What about using a special label to indicate where to 
gather links?

Just food for thought.

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------



Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Vadim Gritsenko <va...@verizon.net>.
Jeff Turner wrote:

>I'm not very familiar with the code; is there some cost in keeping the
>two-pass CLI alive, in the faint hope that caching comes to its rescue
>one day?
>

Guys,

Before you implement some approach here... Let me suggest something.

Right now the sitemap implementation automatically adds a link gatherer to 
the pipeline when it is invoked by the CLI. This link gatherer is in fact a 
"hard-coded links view". I suggest replacing this "hard-coded links 
view" a.k.a. link gatherer with the "real" links view, BUT attaching it as a 
tee to the main pipeline instead of running it as a pipeline by itself. As 
a result, the links-view "baby" will be kept, the two-pass "water" will be 
drained, and the sitemap syntax will stay the same. Moreover, the links view 
will still be accessible from the outside, meaning that you can spider 
the site using out-of-process spiders.

Example:
Given the pipeline:
  G --> T1 (label="content") --> T2 --> S,

And the links view:
  from-label="content" --> T3 --> LinkSerializer,

The pipeline built for the CLI request should be:
  G --> T1 --> Tee --> T2 --> S --> OutputStream
                 \
                   --> LinkSerializer --> NullOutputStream
                           \
                             --> List of links in environment

In one request, you will get:
 * Regular output of the pipeline which will go to the destination Source
 * List of links in the environment which is what link gatherer was made for
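To sketch what such a tee could look like at the SAX level (illustration
only, not an actual Cocoon component; a real implementation would have to
duplicate the full ContentHandler and LexicalHandler surface, only a few
event methods are shown here):

```java
import org.xml.sax.Attributes;
import org.xml.sax.ContentHandler;
import org.xml.sax.SAXException;
import org.xml.sax.helpers.XMLFilterImpl;

// Duplicates SAX events: everything goes both downstream (via super)
// and to a side branch, e.g. a LinkSerializer-like handler.
public class SaxTee extends XMLFilterImpl {
    private final ContentHandler branch;

    public SaxTee(ContentHandler branch) { this.branch = branch; }

    public void startDocument() throws SAXException {
        branch.startDocument();
        super.startDocument();
    }

    public void startElement(String uri, String local, String qName,
                             Attributes atts) throws SAXException {
        branch.startElement(uri, local, qName, atts);
        super.startElement(uri, local, qName, atts);
    }

    public void endElement(String uri, String local, String qName)
            throws SAXException {
        branch.endElement(uri, local, qName);
        super.endElement(uri, local, qName);
    }

    public void characters(char[] ch, int start, int len) throws SAXException {
        branch.characters(ch, start, len);
        super.characters(ch, start, len);
    }

    public void endDocument() throws SAXException {
        branch.endDocument();
        super.endDocument();
    }
}
```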

Comments?

Vadim



Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Jeff Turner <je...@apache.org>.
On Sat, Jun 28, 2003 at 03:38:55PM +0100, Upayavira wrote:
...
> Okay. How about defining a namespace <links:link href="xxxx"/> which
> gets consumed by the transformer, that way you choose in your
> previous XSLT which links you want to be spidered by presenting the
> links in that <links> namespace (and then repeat them for the sake
> of the output).

Sounds good.  So you mean, eg, transforming <a href="..."> into <a
href="..." link:href="...">, and the gather-links transformer uses
the link:href attribute?

...
> Now the only question that remains is whether to have an implicit
> gatherer if no explicit one is specified. I'd probably say no, as
> other discussions have erred away from hidden things like that.

+1

> I think that telling the sitemap where your links are is a pretty
> reasonable adjustment to your site. In fact, we could have two
> transformers - one that just looks for hrefs and xlinks, and another
> that uses a links namespace - the former would make it real easy to
> convert your site for spidering, and the latter providing a method
> to do complex link management.

+1, was just going to suggest that.

> Another question - do we still leave link view (two pass) link
> following in the CLI? Or does this method deprecate and thus replace
> it?

I still have the feeling that a link-gatherer transformer is mixing
concerns a bit, and that two-pass is conceptually nicer:

- We're abusing the name 'transformer', since nothing is transformed.
  If we're really going to go this way, let's define a new sitemap
  element, <map:link-gatherer/>.
- Link gathering is irrelevant for online situations, so we pay some
  performance penalty having a link-gatherer transformer.  This
  illustrates why I think it mixes concerns.
- It's easy to forget to define a link-gatherer transformer for new
  pipelines.  Link-view is cross-cutting and doesn't have this
  problem.

I'm not very familiar with the code; is there some cost in keeping the
two-pass CLI alive, in the faint hope that caching comes to its rescue
one day?

> Thanks for engaging with me on this - I appreciate it.

Thank _you_; an improved CLI will make Forrest significantly more
usable.

--Jeff

> Regards, Upayavira
> 

Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Upayavira <uv...@upaya.co.uk>.
Jeff wrote:

> > So are you saying you can manage without the XSLT stage?
> 
> I'm not sure, perhaps you can advise.  In Forrest we filter the links
> to:
> 
>  - Remove API doc links
>  - Remove links to directories, which break the CLI
>  - Remove image links that have been hacked to work with FOP
> 
> 1) belongs in cli.xconf.  Perhaps the new CLI handles 2) better than
> the original.  I think 3) is obsolete, as LinkSerializer ignores
> XSL:FO-namespaced links anyway.
> 
> > Perhaps I should explain what I had in mind a bit more with that - I
> > guess I would call it a tee, a pipeline element with one input and
> > two outputs. The input is passed unchanged on through to the next
> > stage in the pipeline. But it is also passed through an XSLT before
> > links are gathered from it.
> 
> I'd call it a hack ;)  Why favour XSLT and not STX, or any other
> transformer?  What about XSLT parameters? etc.  If people need XSLT,
> let them use a link view.  I'd suggest just sticking with the basics:
> <map:transform type="gather-links"/>

Okay. How about defining a namespace <links:link href="xxxx"/> which gets 
consumed by the transformer, that way you choose in your previous XSLT which links 
you want to be spidered by presenting the links in that <links> namespace (and then 
repeat them for the sake of the output).

This would be an extremely simple transformer to write. Beyond writing the 
transformer, it would take a minimal amount (1/2 hour) of changes to the rest of the 
CLI.

> Which isn't a hack.  In fact it would be great for Forrest, because we
> only have a few matchers where links are relevant.  All the cocoon:
> and image pipelines could go without.

Yup.

> Also, it resolves another little dilemma I've had with link views. 
> It's all very well having the notion of a cross-cutting 'view', but
> there's no way to override the 'view' for a specific pipeline.  With
> an explicit gather-links transformer, one could have different link
> analysis for each pipeline.  A *.css pipeline could list @import's as
> links, for example.

Great.

> > > It certainly fixes the hard-wired'ness problem you mention above
> > > (that 'content' != XML before the serializer).
> > 
> > And it sounds as if it could be a trivial solution.
> 
> 'Solves' the cocoon: sub-pipeline problem too.

Yup.

Now the only question that remains is whether to have an implicit gatherer if no 
explicit one is specified. I'd probably say no, as other discussions have erred away 
from hidden things like that.

I think that telling the sitemap where your links are is a pretty reasonable adjustment 
to your site. In fact, we could have two transformers - one that just looks for hrefs and 
xlinks, and another that uses a links namespace - the former would make it real easy 
to convert your site for spidering, and the latter providing a method to do complex link 
management.

Another question - do we still leave link view (two pass) link following in the CLI? Or 
does this method deprecate and thus replace it?

Thanks for engaging with me on this - I appreciate it.

Regards, Upayavira




Type-aware Views (Re: Link view goodness)

Posted by Jeff Turner <je...@apache.org>.
On Sun, Jun 29, 2003 at 09:08:14PM +1200, Conal Tuohy wrote:
> Jeff Turner wrote:
> 
> > > That's an issue I've come up against too - it seems that views are
> > > still too "tangled" up with labels and can't cut across pipelines
> > > properly. At least, that's how I understand it - maybe I'm missing
> > > something?
> >
> > I think labels and Views are independent of each other.  You can have
> > a view defined with 'from-position', and not use labels.  Labels are
> > just generic markers, with nothing to say they're only useful for
> > defining views.
> 
> But with from-position you can have only "first" and "last" which is
> even more restrictive than labels. If you want to do anything very
> sophisticated don't you need labels?

Yes, labels and positions.  What else could there be?

> > Views give _every_ public URL in a sitemap an alternative form.  If
> > you only need an alternative form of some URLs, then that can be done
> > just as you've described above, with a request-param selector.
> 
> So ... I could just have used a RequestParamSelector to create my
> different views for the crawler? Damn!

I doubt it.  I was just describing when you'd want to use views at all.
The old CLI chose to use views, which means there's no option for
per-pipeline customization.

> My problem was that I wanted to use Lucene to index a "content" view of
> 2 different pipelines, one of them based on TEI and another on HTML. In
> the case of the TEI pipeline I didn't want to convert the TEI to HTML
> first and then produce a "content" view based on an HTML-ized view of
> the TEI - I wanted an indexable view of the TEI. This is the same issue
> as you mention below:
> 
> > The problem is that Views don't know the type of data they're
> > getting.  If we have a view with from-label="content", we know it's
> > content, but what _type_ of content?  What schema?  What
> > transformation can we apply to create a links-view of this content?
> 
> If you could create more than one view with the same name, then we
> could use labels to specify the schema:
> 
> e.g. 2 pipelines containing:
> ...
> <map:generate src="{1}.xml" label="tei"/>
> ...
> 
> and
> 
> <map:transform src="blah-to-html.xsl" label="html"/>
> 
> ... and 2 views called "content", one with from-label="tei" and the
> other with from-label="html".

Technically that's more or less the solution.  I think a cleaner way
of presenting it is to have one view that interprets different kinds
of data differently:

<map:view name="links" from-position="content">
  <map:select type="xml-type">
    <map:when test="html">
      <map:transform src="html2whatever.xsl"/>
    </map:when>
    <map:when test="tei">
      <map:transform src="tei2whatever.xsl"/>
    </map:when>
  </map:select>
</map:view>

So, treating 'type' as a property of a sitemap component, independent
of labels.  The xml-type selector would somehow discover the type of
XML emitted by its upstream component.
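No xml-type selector exists today, but the discovery step could plausibly
amount to sniffing the root element of the upstream stream. A rough
illustration (the html/tei mapping and the class name are invented for
the example):

```java
import java.io.StringReader;
import javax.xml.parsers.SAXParserFactory;
import org.xml.sax.Attributes;
import org.xml.sax.InputSource;
import org.xml.sax.helpers.DefaultHandler;

public class XmlTypeSniffer {
    // Guess a document "type" from the name of its root element.
    public static String sniff(String xml) throws Exception {
        final String[] type = { "unknown" };
        SAXParserFactory f = SAXParserFactory.newInstance();
        f.setNamespaceAware(true);
        f.newSAXParser().parse(new InputSource(new StringReader(xml)),
            new DefaultHandler() {
                private boolean rootSeen;
                public void startElement(String uri, String local,
                                         String qName, Attributes atts) {
                    if (rootSeen) return; // only the root element matters
                    rootSeen = true;
                    if ("html".equals(local)) type[0] = "html";
                    else if ("TEI".equals(local) || "TEI.2".equals(local)) type[0] = "tei";
                }
            });
        return type[0];
    }
}
```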


--Jeff

> Cheers
> 
> Con
> 

RE: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Conal Tuohy <co...@paradise.net.nz>.
Jeff Turner wrote:

> > That's an issue I've come up against too - it seems that views are
> > still too "tangled" up with labels and can't cut across pipelines
> > properly. At least, that's how I understand it - maybe I'm missing
> > something?
>
> I think labels and Views are independent of each other.  You
> can have a
> view defined with 'from-position', and not use labels.
> Labels are just
> generic markers, with nothing to say they're only useful for defining
> views.

But with from-position you can have only "first" and "last" which is even
more restrictive than labels. If you want to do anything very sophisticated
don't you need labels?

> Views give _every_ public URL in a sitemap an alternative
> form.  If you
> only need an alternative form of some URLs, then that can be
> done just as
> you've described above, with a request-param selector.

So ... I could just have used a RequestParamSelector to create my different
views for the crawler? Damn!

My problem was that I wanted to use Lucene to index a "content" view of 2
different pipelines, one of them based on TEI and another on HTML. In the
case of the TEI pipeline I didn't want to convert the TEI to HTML first and
then produce a "content" view based on an HTML-ized view of the TEI - I
wanted an indexable view of the TEI. This is the same issue as you mention
below:

> The problem is that Views don't know the type of data they're getting.
> If we have a view with from-label="content", we know it's content, but
> what _type_ of content?  What schema?  What transformation
> can we apply
> to create a links-view of this content?

If you could create more than one view with the same name, then we could use
labels to specify the schema:

e.g. 2 pipelines containing:
...
<map:generate src="{1}.xml" label="tei"/>
...

and

<map:transform src="blah-to-html.xsl" label="html"/>

... and 2 views called "content", one with from-label="tei" and the other
with from-label="html".

Cheers

Con


Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Jeff Turner <je...@apache.org>.
On Sun, Jun 29, 2003 at 05:36:45PM +1200, Conal Tuohy wrote:
> Jeff Turner wrote:
> 
> <snip/>
> 
> > Also, it resolves another little dilemma I've had with link views.
> > It's all very well having the notion of a cross-cutting 'view', but
> > there's no way to override the 'view' for a specific pipeline.  With
> > an explicit gather-links transformer, one could have different link
> > analysis for each pipeline.  A *.css pipeline could list @import's as
> > links, for example.
> 
> That's an issue I've come up against too - it seems that views are
> still too "tangled" up with labels and can't cut across pipelines
> properly. At least, that's how I understand it - maybe I'm missing
> something?

I think labels and Views are independent of each other.  You can have a
view defined with 'from-position', and not use labels.  Labels are just
generic markers, with nothing to say they're only useful for defining
views.

> For instance I couldn't see how to have 2 pipelines share a view (i.e.
> both support a view) unless the 2 pipelines had a common stage
> somewhere.
> 
> I've always wondered why views weren't implemented using a Selector?
> 
> <map:select type="view">
> 	<map:when test="links">
> 		<map:transform src="convert-format-X-to-links.xsl"/>
> 		<map:serialize type="links"/>
> 	</map:when>
> 	<map:when test="content">
> 		<map:transform src="convert-format-X-to-HTML-xsl"/>
> 		<map:serialize type="html"/>
> 	</map:when>
> </map:select>

Views give _every_ public URL in a sitemap an alternative form.  If you
only need an alternative form of some URLs, then that can be done just as
you've described above, with a request-param selector.

The problem is that Views don't know the type of data they're getting.
If we have a view with from-label="content", we know it's content, but
what _type_ of content?  What schema?  What transformation can we apply
to create a links-view of this content?

That's why I'm looking forward to Cocoon 4.0, which will have strongly
typed pipelines.  Then the links view can see what kind of content it's
getting (say *.css), and apply an appropriate transformation to extract
links (@import'ed files).  Given the current release rate, Cocoon 4.0 is
due in early 2030.

--Jeff

> In this way different pipelines could have quite different views,
> without sharing a commonly-labelled component. I guess this is more
> verbose than the current approach, where the view transformations are
> attached by name using a label, but for some reason the label approach
> reminds me powerfully of GOTO.
> 
> Cheers
> 
> Con
> 

RE: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Conal Tuohy <co...@paradise.net.nz>.
Jeff Turner wrote:

<snip/>

> Also, it resolves another little dilemma I've had with link
> views.  It's
> all very well having the notion of a cross-cutting 'view',
> but there's no
> way to override the 'view' for a specific pipeline.  With an explicit
> gather-links transformer, one could have different link
> analysis for each
> pipeline.  A *.css pipeline could list @import's as links,
> for example.

That's an issue I've come up against too - it seems that views are still too
"tangled" up with labels and can't cut across pipelines properly. At least,
that's how I understand it - maybe I'm missing something?

For instance I couldn't see how to have 2 pipelines share a view (i.e. both
support a view) unless the 2 pipelines had a common stage somewhere.

I've always wondered why views weren't implemented using a Selector?

<map:select type="view">
	<map:when test="links">
		<map:transform src="convert-format-X-to-links.xsl"/>
		<map:serialize type="links"/>
	</map:when>
	<map:when test="content">
		<map:transform src="convert-format-X-to-HTML-xsl"/>
		<map:serialize type="html"/>
	</map:when>
</map:select>

In this way different pipelines could have quite different views, without
sharing a commonly-labelled component. I guess this is more verbose than the
current approach, where the view transformations are attached by name using
a label, but for some reason the label approach reminds me powerfully of
GOTO.

Cheers

Con


Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Jeff Turner <je...@apache.org>.
On Sat, Jun 28, 2003 at 11:07:45AM +0100, Upayavira wrote:
> On 28 Jun 2003 at 18:45, Jeff Turner wrote:
...
> > > <map:match pattern="page.html">
> > >   <map:generate src="page.xml"/>
> > >   <map:transform src="before-links.xsl"/>
> > >   <map:transform type="gather-links" src="identify-links.xsl"/>
> > >   <map:transform src="after-links.xsl"/>
> > >   <map:serialize/>
> > > </map:match>
> > > 
> > > So there's no hidden link gatherer. And you've got a single xslt to
> > > filter, etc. Not specifying src="xxx" skips the xsl stage. The
> > > output of this xsl would be xml conforming to a predefined
> > > namespace.
> > 
> > Having eliminated the dont-follow-these-links use-case, I don't see a
> > use-case for XSLT transformations, so it simplifies to 
> > 
> > <map:transform type="gather-links"/>
> 
> So are you saying you can manage without the XSLT stage?

I'm not sure, perhaps you can advise.  In Forrest we filter the links to:

 - Remove API doc links
 - Remove links to directories, which break the CLI
 - Remove image links that have been hacked to work with FOP

1) belongs in cli.xconf.  Perhaps the new CLI handles 2) better than the
original.  I think 3) is obsolete, as LinkSerializer ignores
XSL:FO-namespaced links anyway.

> Perhaps I should explain what I had in mind a bit more with that - I
> guess I would call it a tee, a pipeline element with one input and two
> outputs. The input is passed unchanged on through to the next stage in
> the pipeline. But it is also passed through an XSLT before links are
> gathered from it.

I'd call it a hack ;)  Why favour XSLT and not STX, or any other
transformer?  What about XSLT parameters? etc.  If people need XSLT, let
them use a link view.  I'd suggest just sticking with the basics:

<map:transform type="gather-links"/>

Which isn't a hack.  In fact it would be great for Forrest, because we
only have a few matchers where links are relevant.  All the cocoon: and
image pipelines could go without.

Also, it resolves another little dilemma I've had with link views.  It's
all very well having the notion of a cross-cutting 'view', but there's no
way to override the 'view' for a specific pipeline.  With an explicit
gather-links transformer, one could have different link analysis for each
pipeline.  A *.css pipeline could list @import's as links, for example.

> > It certainly fixes the hard-wired'ness problem you mention above (that
> > 'content' != XML before the serializer).
> 
> And it sounds as if it could be a trivial solution.

'Solves' the cocoon: sub-pipeline problem too.

--Jeff

> 
> Upayavira

Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Upayavira <uv...@upaya.co.uk>.
On 28 Jun 2003 at 18:45, Jeff Turner wrote:

> On Sat, Jun 28, 2003 at 07:29:49AM +0100, Upayavira wrote:
> > On 28 Jun 2003 at 11:59, Jeff Turner wrote:
> ...
> > Okay. For the CLI, the cli.xconf file is the equivalent of the
> > web.xml and the user agent. 
> > 
> > Now, normally the user agent requests a URI, and that's it. It is up
> > to the user agent as to what to do with that URI.
> 
> Oh I see.  Yep, makes sense that the 'user agent' be the one who
> decides whether or not to chase down links.
> 
> > Are you saying that you want to put the configuration as to where
> > pages should be placed into the sitemap?
> 
> No, that's the user agent's (CLI's) business.

Good.

> > Or an alternative would be to ask: can you always do your link view
> > with a single XSLT stage? If so:
> > 
> > <map:match pattern="page.html">
> >   <map:generate src="page.xml"/>
> >   <map:transform src="before-links.xsl"/>
> >   <map:transform type="gather-links" src="identify-links.xsl"/>
> >   <map:transform src="after-links.xsl"/>
> >   <map:serialize/>
> > </map:match>
> > 
> > So there's no hidden link gatherer. And you've got a single xslt to
> > filter, etc. Not specifying src="xxx" skips the xsl stage. The
> > output of this xsl would be xml conforming to a predefined
> > namespace.
> 
> Having eliminated the dont-follow-these-links use-case, I don't see a
> use-case for XSLT transformations, so it simplifies to 
> 
> <map:transform type="gather-links"/>

So are you saying you can manage without the XSLT stage? Perhaps I should 
explain what I had in mind a bit more with that - I guess I would call it a tee, a pipeline 
element with one input and two outputs. The input is passed unchanged on through 
to the next stage in the pipeline. But it is also passed through an XSLT before links 
are gathered from it.

Are you saying you can manage without this?

> It certainly fixes the hard-wired'ness problem you mention above (that
> 'content' != XML before the serializer).

And it sounds as if it could be a trivial solution.

Upayavira

Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Jeff Turner <je...@apache.org>.
On Sat, Jun 28, 2003 at 07:29:49AM +0100, Upayavira wrote:
> On 28 Jun 2003 at 11:59, Jeff Turner wrote:
...
> Okay. For the CLI, the cli.xconf file is the equivalent of the web.xml and the user 
> agent. 
> 
> Now, normally the user agent requests a URI, and that's it. It is up to the user agent 
> as to what to do with that URI.

Oh I see.  Yep, makes sense that the 'user agent' be the one who decides
whether or not to chase down links.

> Are you saying that you want to put the configuration as to where pages
> should be placed into the sitemap?

No, that's the user agent's (CLI's) business.

...
> Yup. The primary aim was to reduce the number of page generations. And there was 
> an element of hack here - particularly in the 'hard-wired'ness of the LinkGatherer. 
...

> Or an alternative would be to ask: can you always do your link view
> with a single XSLT stage? If so:
> 
> <map:match pattern="page.html">
>   <map:generate src="page.xml"/>
>   <map:transform src="before-links.xsl"/>
>   <map:transform type="gather-links" src="identify-links.xsl"/>
>   <map:transform src="after-links.xsl"/>
>   <map:serialize/>
> </map:match>
> 
> So there's no hidden link gatherer. And you've got a single xslt to filter, etc. Not 
> specifying src="xxx" skips the xsl stage. The output of this xsl would be xml 
> conforming to a predefined namespace.

Having eliminated the dont-follow-these-links use-case, I don't see a
use-case for XSLT transformations, so it simplifies to 

<map:transform type="gather-links"/>

It certainly fixes the hard-wired'ness problem you mention above (that
'content' != XML before the serializer).


--Jeff

> 
> Regards, Upayavira

Re: Link view goodness (Re: residuals of MIME type bug ?)

Posted by Upayavira <uv...@upaya.co.uk>.
On 28 Jun 2003 at 11:59, Jeff Turner wrote:

> Conceptually, I like the link-view because:
> 
> 1) Links are URIs
> 2) The sitemap is 100% in control of the URI space
>
> implying:
> 
> 3) The sitemap ought to be in control of link URI manipulation, not
> some external cli.xconf file.

Okay. For the CLI, the cli.xconf file is the equivalent of the web.xml and the user 
agent. 

Now, normally the user agent requests a URI, and that's it. It is up to the user agent 
as to what to do with that URI. Are you saying that you want to put the configuration 
as to where pages should be placed into the sitemap? And which URIs should be 
rendered? If so, how would you do this?

Thing is, for me, that means hardwiring the URIs you want to render into your site, 
and doesn't allow for a dynamic regeneration of different parts of the site.

> Now for practicalities:
> 
> I like the fact that the sitemap writer has full control over what is
> considered a link, and what those links look like.  An invisible
> linkgatherer transformer effectively hardcodes:
> 
> <map:serializer name="links"
> src="org.apache.cocoon.serialization.LinkSerializer">
>   <encoding>ISO-8859-1</encoding>
> </map:serializer>
> <map:view name="links" from-position="last">
>   <map:serialize type="links"/>
> </map:view>

Yup. The primary aim was to reduce the number of page generations. And there was 
an element of hack here - particularly in the 'hard-wired'ness of the LinkGatherer. 

It has to be said that the link gatherer uses the same approach as the LinkTranslator, 
which is used by the 'mime-type checking' code. That's where I got the idea.

> There are various points of flexibility that the links view allows:
> 
> Alternative Link schemes
> ------------------------
> 
> If the user's XML doesn't happen to use XLink or @href for linking,
> they would implement an alternative to LinkSerializer.
> 
> For example, imagine we want to render only PDFs.  The last XSLT in
> our pipeline would produce xsl:fo.  The standard LinkSerializer
> doesn't know about fo:external-link elements.  Even if it did, we'd
> want to filter out links to images, since PDFs have images inlined.
> What is an image?  That's up to the sitemap writer.
> 
> Encoding
> --------
> When serializing links in Japanese or something, wouldn't tweaking the
> <encoding> tag be necessary?
> 
> Filtering unwanted links
> ------------------------
> We can filter out unwanted links, with arbitrary precision (eg using
> XPath expressions to determine what to throw out).  In Forrest we use
> <xsl:when test="contains(., 'api/')"> to filter out javadoc links.
> Eventually, 'api/' will be determined at runtime, by querying an input
> module that reads a forrest.xml config file.

I can (and already could) see these benefits. I would like to see a way to meet both of 
our requirements (a link view and single pass generation). Now, caching might be the 
simplest way. Or an alternative would be to ask: can you always do your link view with 
a single XSLT stage? If so:

<map:match pattern="page.html">
  <map:generate src="page.xml"/>
  <map:transform src="before-links.xsl"/>
  <map:transform type="gather-links" src="identify-links.xsl"/>
  <map:transform src="after-links.xsl"/>
  <map:serialize/>
</map:match>

So there's no hidden link gatherer. And you've got a single xslt to filter, etc. Not 
specifying src="xxx" skips the xsl stage. The output of this xsl would be xml 
conforming to a predefined namespace.

> I hope I've convinced you :)  Certainly for simpler needs, hardcoding
> a LinkGathererTransformer is fine, but in general (and I hope where
> Forrest is going) we need the full power of a link view.

I've always been convinced - just don't like the double pass.

Regards, Upayavira


Link view goodness (Re: residuals of MIME type bug ?)

Posted by Jeff Turner <je...@apache.org>.
On Thu, Jun 26, 2003 at 03:08:09PM +0100, Upayavira wrote:
>  
> > But I like the link-views! ;)  It's one of those design elegancies
> > that makes Cocoon unique.  Adding a don't-crawl-these-links option to
> > the new CLI may solve the same problem, but IMHO it's a hack in
> > comparison.
> 
> What specifically is it that you like about link views? Cos at the
> moment, the alternative way of gathering links is with an invisible
> (automatically inserted) LinkGatherer transformer stage right before
> the serializer.

Conceptually, I like the link-view because:

1) Links are URIs
2) The sitemap is 100% in control of the URI space

implying:

3) The sitemap ought to be in control of link URI manipulation, not
some external cli.xconf file.

Now for practicalities:

I like the fact that the sitemap writer has full control over what is
considered a link, and what those links look like.  An invisible linkgatherer
transformer effectively hardcodes:

<map:serializer name="links" src="org.apache.cocoon.serialization.LinkSerializer">
  <encoding>ISO-8859-1</encoding>
</map:serializer>
<map:view name="links" from-position="last">
  <map:serialize type="links"/>
</map:view>

There are various points of flexibility that the links view allows:

Alternative Link schemes
------------------------

If the user's XML doesn't happen to use XLink or @href for linking, they would
implement an alternative to LinkSerializer.

For example, imagine we want to render only PDFs.  The last XSLT in
our pipeline would produce xsl:fo.  The standard LinkSerializer
doesn't know about fo:external-link elements.  Even if it did, we'd
want to filter out links to images, since PDFs have images inlined.
What is an image?  That's up to the sitemap writer.

Encoding
--------
When serializing links in Japanese or something, wouldn't tweaking the
<encoding> tag be necessary?

Filtering unwanted links
------------------------
We can filter out unwanted links, with arbitrary precision (eg using
XPath expressions to determine what to throw out).  In Forrest we use
<xsl:when test="contains(., 'api/')"> to filter out javadoc links.
Eventually, 'api/' will be determined at runtime, by querying an input
module that reads a forrest.xml config file.
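That filter could be sketched as an identity transform that drops javadoc links (assuming, hypothetically, that links arrive as <link> elements; the real element names depend on what feeds LinkSerializer):

```xml
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- pass everything through unchanged by default -->
  <xsl:template match="@*|node()">
    <xsl:copy><xsl:apply-templates select="@*|node()"/></xsl:copy>
  </xsl:template>
  <!-- suppress links that point into the javadoc area -->
  <xsl:template match="link[contains(., 'api/')]"/>
</xsl:stylesheet>
```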


I hope I've convinced you :)  Certainly for simpler needs, hardcoding
a LinkGathererTransformer is fine, but in general (and I hope where
Forrest is going) we need the full power of a link view.


--Jeff


> 
> Upayavira
> 

Re: Still having problems with visibility of subroutine - long.

Posted by Andrew O'Brien <an...@switchonline.com.au>.
On Thu, Nov 08, 2001 at 07:22:15AM -0800, Scott Chapman wrote:
> Andrew,
> Thanks for all your help!

No worries. I've sent this to the list just in case someone else finds
the information useful.

> Can I tie the subroutines into the $req object hash so that they are  
> imported into places that $req is imported and perhaps not have to 
> call them using the $req->sub_name syntax or does $req only 
> work with variables?  In that case can I a pointer to the subroutine 
> in the $req hash?  

You can either do something with the EMBPERL_OBJECT_HANDLER_CLASS as I
mentioned before or you can put subroutines in $req. Either way you
will have to call the subroutine like $req->subname ... if you want to
call subroutines directly then put them in a module and have

[! use MyModule; !]

at the top of every file. Try not to think of the whole embperl
Execution path as being one big perl program - in a lot of ways, each
Execute is its own separate "program" so you have to import subroutines
every time.

Some examples (I'm procrastinating on my own project, can't you tell?
:)

---------------- Using a module: ----------------

The file /path/to/MyModule.pm:

package MyModule; # give it a name
use strict;
use otherstuffyoumightneed;

use vars qw ($VERSION);
$VERSION = 0.01;

sub log_stuff {
  my ($arg1, $arg2) = @_;
  ...
}

A standard blah.epl:

[! use MyModule; !]

[- log_stuff("Hello", "world"); -]


-------------- Using $req: -------------------

base.epl is the same

constants.epl:

[- $req = shift;
 $req->{somename} = 'somevalue';
 ...
 $req->{logstuff} = sub {
   my ($arg1, $arg2) = @_;
   ...
 };
-]

Usage in a file.epl:

[- $req = shift;
  $req->{logstuff}->("Hello", "world");
-]

------------ Using EMBPERL_OBJECT_HANDLER_CLASS -----------

In your httpd.conf:

# somewhere before you "use" EmbperlObject
# This assumes that this module can be found when Embperl starts up
PerlSetEnv EMBPERL_OBJECT_HANDLER_CLASS MyProject::MyReq
...

The file /path/to/MyProject/MyReq.pm:

package MyProject::MyReq;

use HTML::Embperl;
BEGIN {
  if (exists $ENV{MOD_PERL}) { require Apache; } # if you need this anywhere
};

use strict;
use vars qw($VERSION @ISA $escmode %fdat *OUT *LOG);

@ISA = qw(HTML::Embperl::Req);
BEGIN {
  ($VERSION) = '$Revision: 1.2 $' =~ /Revision: ([\d.]+)/; #'emacs;
};

# for convenience  
*escmode = \$HTML::Embperl::escmode;
*fdat = \%HTML::Embperl::fdat;
tie *OUT, 'HTML::Embperl::Out';
tie *LOG, 'HTML::Embperl::Log';

sub logstuff {
  my $self = shift;
  ...
}

Usage in a file.epl:

[- $req = shift;
  $req->logstuff("Hello", "world");
-]

---------------------------------------------------------------

> This object-oriented stuff is still rather of a black box to me.  I've 
> got Manning's Perl books as well as O'Reilly's but time to read 
> them is scarce in the middle of this project.

Good luck with it all ...

-- 
 Andrew O'Brien                                                               
 Product Engineer                        email: andrewo@switchonline.com.au.
 Switch Online Group Pty Limited         phone: +61 2 9299 1133             
 ABN 89 092 286 327                      fax: +61 2 9299 1134             

---------------------------------------------------------------------
To unsubscribe, e-mail: embperl-unsubscribe@perl.apache.org
For additional commands, e-mail: embperl-help@perl.apache.org


Re: Release?

Posted by Christian Geisert <Ch...@isu-gmbh.de>.
Christian Geisert wrote:
> 
> Karen Lease wrote:
> >
> > I'll be happy to test it on my Linux/MacPPC. That's about as far away
> > from a windows machine as we can get...
> 
> I can test it on an AS/400. This is really far away from windows ;-)

Ok I did a quick test with docs/examples/fo and it looks good except
images.fo which did not work as expected (no awt) and list.fo which produced
a FATAL ERROR (maybe I can have a look at it later...)

Tested with OS/400 V4R4 and JDK 1.1.7

Christian

Re: Release?

Posted by Christian Geisert <Ch...@isu-gmbh.de>.
Karen Lease wrote:
> 
> I'll be happy to test it on my Linux/MacPPC. That's about as far away
> from a windows machine as we can get...

I can test it on an AS/400. This is really far away from windows ;-)

> Karen

Christian

Re: Release?

Posted by Karen Lease <kl...@club-internet.fr>.
I'll be happy to test it on my Linux/MacPPC. That's about as far away
from a windows machine as we can get...

Karen

Fotis Jannidis wrote:
> 
> From:                   Arved Sandstrom <Ar...@chebucto.ns.ca>
> 
> > I lost track of who volunteered to actually do this release. Was it Fotis or
> > Steven?
> 
> Nobody I think. At the end ot next week (Thursday) I can prepare it,
> but need someone who checks whether the release file builds
> without any problems on a non Windows machine. If Steve or
> another one can step in I would be lucky ;-)
> 
> Fotis


Re: Volunteers needed: Reboot of the XML 'PMC'.

Posted by Arved Sandstrom <Ar...@chebucto.ns.ca>.
On Sat, 03 Mar 2001, Arnaud Le Hors wrote:
> Arved Sandstrom wrote:
> > 
> > But XML Apache has another current
> > goal - "to provide feedback to standards bodies (such as IETF and W3C) from
> > an implementation perspective" - that covers this base, too; we are
> > responsible for generating quality feedback to the spec writers. Why?
> > Because we said so, ourselves. And I'm not sure we do that so well, either.
> 
> I couldn't tell if it's true for all the XML projects but my guess is
> that the situation is actually better than you might think. As it
> stands, several committers of Xerces and Xalan are members of the
> various standard committees that actually produce the specs that govern
> these projects. I, for one, regularly forward Xerces users feedback to
> the W3C Working Groups I'm in (XML Core and DOM). I'm sure this is true
> for other committers involved in similar groups.
> This said, it would be nice to identify these people and figure how much
> coverage we actually have.

FOP committers have personal contacts with several members of the W3C XSL WG
(FO group), and we touch base with other implementors to help define something
of a common front. And I know that Max Froumentin regularly monitors fop-dev.
Apart from sending direct requests for clarification and comments to XSL
editors, which is infrequent at the moment, we are aware that several XSL (FO)
spec types keep an eye on certain mailing lists (Mulberry XSL, and now W3C XSL
FO), and these are useful places for raising general concerns. So we are not
doing too badly.

I don't know how much input and feedback James Tauber managed to provide to the
XSL WG back in 1999 or early 2000, say, but since then I'd guess that
implementor feedback has played a minimal part in guiding XSL 1.0 (mostly
because it was too late). The real challenge will be in seeing what we can do
better with XSL 2.0.

I'm personally hoping that the WG just declares XSL 1.0 a Recommendation and
moves on. There is enough of an implementation base and enough community
comprehension of the spec to declare victory, IMO. Right now there is something
of a delay due to unfortunate (and unrealistic) exit conditions built into the
Candidate Recommendation, which has to do with testing. I don't really want to
get into the details, but suffice it to say that some things weren't well
handled. I'm waiting to hear back as to what will happen with this. Be that as
it may, I think that FOP as a whole will be quite proactive in widening our
engagement with W3C XSL WG members, in future.

Regards,
Arved Sandstrom



Re: Volunteers needed: Reboot of the XML 'PMC'.

Posted by Arnaud Le Hors <le...@us.ibm.com>.
One thing I'd like to see happening with this reboot is a clear rule on
who can be on the PMC list. As it stands there are people on the PMC
list who are not part of the PMC per se, and I'm not aware of why
they've been granted this privilege while other people (in particular
committers) have never been offered a place on the list themselves.
I would suggest as a rule that all committers be offered the chance to subscribe.
-- 
Arnaud  Le Hors - IBM Cupertino, XML Strategy Group


Re: Volunteers needed: Reboot of the XML 'PMC'.

Posted by Arnaud Le Hors <le...@us.ibm.com>.
Arved Sandstrom wrote:
> 
> But XML Apache has another current
> goal - "to provide feedback to standards bodies (such as IETF and W3C) from
> an implementation perspective" - that covers this base, too; we are
> responsible for generating quality feedback to the spec writers. Why?
> Because we said so, ourselves. And I'm not sure we do that so well, either.

I couldn't tell if it's true for all the XML projects but my guess is
that the situation is actually better than you might think. As it
stands, several committers of Xerces and Xalan are members of the
various standard committees that actually produce the specs that govern
these projects. I, for one, regularly forward Xerces users feedback to
the W3C Working Groups I'm in (XML Core and DOM). I'm sure this is true
for other committers involved in similar groups.
This said, it would be nice to identify these people and figure how much
coverage we actually have.
-- 
Arnaud  Le Hors - IBM Cupertino, XML Strategy Group



Re: Volunteers needed: Reboot of the XML 'PMC'.

Posted by Arved Sandstrom <Ar...@chebucto.ns.ca>.
At 02:06 PM 3/1/01 -0700, Kimbro Staken wrote:
[SNIP]
>To achieve this I suggest that you keep your PMC small, 3-4 people vs.
>the 3-9 originally called for. Task the PMC with high level strategic
>and administrative tasks and the authority to delegate operational
>issues to temp groups. By delegating operational issues you remove the
>need to have members from all sub projects on the PMC. You need to think
>about the situation as you add additional projects. If all current
>projects have representation all future projects will also expect
>representation with the likely end result that your PMC will be so large
>that it will once again be ineffective. Most of your business is done
>out in the open anyway so this shouldn't really be a problem as everyone
>will see what is going on and will still have input. You should also
>accept that your PMC is serving two masters. First the development
>community within the Apache XML project and second the target market at
>which you are aiming the technology under development. Most of the
>discussion I've seen so far has focused on the first master and ignores
>the second. I think you need to keep both in mind to continue growth of
>the organization.

Good points all (the rest of your post included). Apache prides itself on 
being not just another software sweatshop, open-source or no open-source, 
but in fostering a sense of community, and this is a very worthy goal. But 
the main goal always has to be the end-user...after all, what is the point 
of developing software otherwise?

The current goals for XML Apache include: "to provide commercial-quality 
standards-based XML solutions that are developed in an open and cooperative 
fashion". Assuming that this remains a goal, I think we will all certainly 
acknowledge that it is entirely impossible to do the above without 
soliciting user requirements, i.e. without being responsive to the user 
community. If you look at the fop-dev mailing list, which also acts as a 
user list, I think you will see that FOP, at least, tries to listen. I'm not 
saying that others don't - I simply don't know. But it would be nice to see 
this being stressed more strongly at the overall project level.

A number of Apache XML projects have their requirements nailed down quite a 
bit since they are trying to implement a spec. In such cases we have to hope 
that the spec writers did the due diligence and got input from users (in 
some cases I suspect that they didn't). But XML Apache has another current 
goal - "to provide feedback to standards bodies (such as IETF and W3C) from 
an implementation perspective" - that covers this base, too; we are 
responsible for generating quality feedback to the spec writers. Why? 
Because we said so, ourselves. And I'm not sure we do that so well, either.

I like the direction this discussion is headed. You're absolutely right, 
IMO. To quote your above: "...your PMC is serving two masters. First the 
development community within the Apache XML project and second the target 
market..." Yes, I agree. And there is also the legalistic third aspect - 
being answerable to the board. I tend to agree that perhaps a smallish PMC 
with high-level tasks and an oversight mandate (QA, effectively: "did you do 
what you said that you were going to do? No? Why not?"), and operational 
groups on a contingency basis, as has been proposed here, sounds like a 
workable solution to address all 3 concerns.

Regards,
Arved Sandstrom

Fairly Senior Software Type
e-plicity (http://www.e-plicity.com)
Wireless * B2B * J2EE * XML --- Halifax, Nova Scotia


Re: Volunteers needed: Reboot of the XML 'PMC'.

Posted by Kimbro Staken <ks...@dbxmlgroup.com>.
I think you're on the right track with this. As I've been observing you
clearly have two different levels of problems to address. First the
management problems that were outlined in the original email from Dirk
and second the operational problems as raised by members of the various
projects. It seems you have a conflict between the roles of the PMC and
the type of people who are interested and capable of filling those
roles. Maybe this is why in the past the PMC has been ineffective.
Having the PMC tasked with both strategic management and tactical
operations is a lot to ask and those are roles typically fulfilled by
different types of people. I'm going to use a few naughty words here but
I think you need to treat the organization as a business. It isn't a
traditional business as your goal is not to make money but it is a
business just the same as any non-profit is a business. To this end you
need to ensure that your management role is doing many of the things
that managers in traditional businesses are doing. This includes setting
direction, resolving disputes and yes even marketing(brrr. I can feel
the chill) :-). Ultimately you need to decide why does the PMC exist at
all. Does it exist to guide the Apache XML organization on a path of
growth and prosperity or does it exist to solve cross project technical
problems? Having it attempt to directly tackle both will make it
ineffective.

In the case of "Temporary Working Groups" the role of the PMC might be
to guide their formation, provide oversight and ensure momentum is
maintained during the effort, and then dissolve the effort when the
working group has determined the problem is solved.

Right now you're placing people who are primarily developers into a
role of strategic management. This will work for small organizations but
as you grow it will become more of a problem. It seems the ASF overall
has succeeded in making this transition, now might be the time for the
XML group to do the same.

To achieve this I suggest that you keep your PMC small, 3-4 people vs.
the 3-9 originally called for. Task the PMC with high level strategic
and administrative tasks and the authority to delegate operational
issues to temp groups. By delegating operational issues you remove the
need to have members from all sub projects on the PMC. You need to think
about the situation as you add additional projects. If all current
projects have representation all future projects will also expect
representation with the likely end result that your PMC will be so large
that it will once again be ineffective. Most of your business is done
out in the open anyway so this shouldn't really be a problem as everyone
will see what is going on and will still have input. You should also
accept that your PMC is serving two masters. First the development
community within the Apache XML project and second the target market at
which you are aiming the technology under development. Most of the
discussion I've seen so far has focused on the first master and ignores
the second. I think you need to keep both in mind to continue growth of
the organization.

This is just my 2 pennies. Keep in mind, I'm looking at the Apache XML
project as a consumer of the technology that you are developing not as a
person developing that technology. From that perspective I'm seeing
struggling on strategic direction and management, which is why I
volunteered to help out. Because of my work on dbXML my programming
mental bandwidth is used up so I can't contribute on a code level but I
think I can contribute on other levels. Especially if you guys really
want to add an XML database. If I can help I'd love to do so but if you
consider me unqualified because of lack of code contributions it's all
good to me too. :-) BTW, just so you know I really do work on projects
you can take a look at the list archives for dbXML
http://www.geocrawler.com/archives/3/4793/2001/ and XML:DB
http://archive.xmldb.org/xmldb/threads.html.

James Melton wrote:
> 
> It seems from reading the existing bylaws that there is nothing to
> prevent the PMC from doing as you suggested. Rather than actually
> changing the bylaws, perhaps another category under "Roles and
> Responsibilities" (http://xml.apache.org/roles.html) could include
> "Temporary Working Groups":
> 
> "Temporary Working Groups are created by the PMC to resolve issues that
> require more attention than the PMC can provide directly. Typically
> these are issues that affect multiple subprojects. Members are drawn
> from the Open Source community and are appointed by the PMC. The Working
> Group acts in place of the PMC to resolve an issue, and upon resolution
> the group dissolves."
> 
> Jim.
> 
> Fotis Jannidis wrote:
> >
> > From:                   Dirk-Willem van Gulik <di...@covalent.net>
> >
> > > Care to write it down in such a way as it would appear ? I.e. see
> > > the web site for what is there now (xml.apache.org/misson.html
> > > et.al.) and see what you would come up with - possibly with > '
> > > footnotes' where needed to explain what is really meant.
> >
> > What I had in mind would be an addition on the page
> > http://xml.apache.org/management.html section "Roles".
> >
> > >>>
> > The PMC is responsible for the strategic direction and success of
> > the xml.apache.org Project. This governing body is expected
> > to ensure the project's welfare and guide its overall direction. The
> > PMC may not necessarily participate in the day-to-day coding but
> > is involved in the overall development plans, the alleviation of any
> > bottlenecks, the resolution of conflicts, and the overall technical
> > success of the project.
> > <<<
> >
> > NEW:
> > In order to handle problems concerning more than one sub project
> > the PMC can create working groups with volunteers from all
> > Apache projects. [Footnote / Addition:  These working
> > groups are task oriented and dissolved as soon as the
> > problem is solved or some agreement has been found.]
> >
> > Fotis
> >
> > ---------------------------------------------------------------------
> > In case of troubles, e-mail:     webmaster@xml.apache.org
> > To unsubscribe, e-mail:          general-unsubscribe@xml.apache.org
> > For additional commands, e-mail: general-help@xml.apache.org
> 
> --
> 
> ____________________________________________________________
> James Melton                 CyLogix
> 609.750.5190                 609.750.5100
> james.melton@cylogix.com     www.cylogix.com
> 
> ---------------------------------------------------------------------
> In case of troubles, e-mail:     webmaster@xml.apache.org
> To unsubscribe, e-mail:          general-unsubscribe@xml.apache.org
> For additional commands, e-mail: general-help@xml.apache.org

-- 
Kimbro Staken
Chief Technology Officer
dbXML Group L.L.C
http://www.dbxmlgroup.com

Re: Volunteers needed: Reboot of the XML 'PMC'.

Posted by Dirk-Willem van Gulik <di...@covalent.net>.

On Thu, 1 Mar 2001, James Melton wrote:

> It seems from reading the existing bylaws that there is nothing to
> prevent the PMC from doing as you suggested. Rather than actually
> changing the bylaws, perhaps another category under "Roles and
> Responsibilities" (http://xml.apache.org/roles.html) could include
> "Temporary Working Groups":

Absolutely correct.
 
> "Temporary Working Groups are created by the PMC to resolve issues that
> require more attention than the PMC can provide directly. Typically
> these are issues that affect multiple subprojects. Members are drawn
> from the Open Source community and are appointed by the PMC. The Working
> Group acts in place of the PMC to resolve an issue, and upon resolution
> the group dissolves."

Nice.

Dw


Re: Volunteers needed: Reboot of the XML 'PMC'.

Posted by James Melton <ja...@cylogix.com>.
It seems from reading the existing bylaws that there is nothing to
prevent the PMC from doing as you suggested. Rather than actually
changing the bylaws, perhaps another category under "Roles and
Responsibilities" (http://xml.apache.org/roles.html) could include
"Temporary Working Groups":

"Temporary Working Groups are created by the PMC to resolve issues that
require more attention than the PMC can provide directly. Typically
these are issues that affect multiple subprojects. Members are drawn
from the Open Source community and are appointed by the PMC. The Working
Group acts in place of the PMC to resolve an issue, and upon resolution
the group dissolves."

Jim.

Fotis Jannidis wrote:
> 
> From:                   Dirk-Willem van Gulik <di...@covalent.net>
> 
> > Care to write it down in such a way as it would appear ? I.e. see
> > the web site for what is there now (xml.apache.org/misson.html
> > et.al.) and see what you would come up with - possibly with > '
> > footnotes' where needed to explain what is really meant.
> 
> What I had in mind would be an addition on the page
> http://xml.apache.org/management.html section "Roles".
> 
> >>>
> The PMC is responsible for the strategic direction and success of
> the xml.apache.org Project. This governing body is expected
> to ensure the project's welfare and guide its overall direction. The
> PMC may not necessarily participate in the day-to-day coding but
> is involved in the overall development plans, the alleviation of any
> bottlenecks, the resolution of conflicts, and the overall technical
> success of the project.
> <<<
> 
> NEW:
> In order to handle problems concerning more than one sub project
> the PMC can create working groups with volunteers from all
> Apache projects. [Footnote / Addition:  These working
> groups are task oriented and dissolved as soon as the
> problem is solved or some agreement has been found.]
> 
> Fotis
> 
> ---------------------------------------------------------------------
> In case of troubles, e-mail:     webmaster@xml.apache.org
> To unsubscribe, e-mail:          general-unsubscribe@xml.apache.org
> For additional commands, e-mail: general-help@xml.apache.org

-- 

____________________________________________________________
James Melton                 CyLogix
609.750.5190                 609.750.5100
james.melton@cylogix.com     www.cylogix.com

Re: Using custom tags and struts together

Posted by Murray Collingwood <mu...@focus-computing.com>.
Thanks Wendy

I think the idea of using a security bean is what I need to do.

Kind regards
mc

On 20 Aug 2005 at 9:50, Wendy Smoak wrote:

> > I hope you are getting the picture.  This is why I was trying to use a
> > custom tag that could still interact with my model, call business methods 
> > to make security
> > decisions and vary the generated link accordingly.
> >
> > And finally the question: How should I go about writing the "Update" link
> > now that we all understand the problem?
> 
> You could put all that logic somewhere else, perhaps in a bean with an 
> 'isUpdateAllowed' method:
>    <c:if test="${security.updateAllowed}">  <html:link ...>  </c:if>
> 
> Since you're already okay with a custom tag, what about extending the Struts 
> link tag to do what you need?
> 
> I also wonder if you really need <html:link> for this-- you're already 
> hard-coding the action name.  If there's nothing dynamic about the link 
> other than that 'entry[ix]' part, then can you just write out the <a 
> href="..."> from your custom tag?
> 
> I've had success with request.isUserInRole-- *without* getting into custom 
> Realms and CMA-- just add a Filter, wrap the request and override 
> 'isUserInRole'.  It sounds like you have some "levels" that could be roles. 
> (I'm not clear on the runtime checking of fields you mentioned, but you 
> should have access to the request and session from isUserInRole.)  Struts 
> has <logic:present role="..."> (and there's probably a JSTL equivalent).
> 
> -- 
> Wendy Smoak 
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: user-unsubscribe@struts.apache.org
> For additional commands, e-mail: user-help@struts.apache.org
> 
> 
> -- 
> No virus found in this incoming message.
> Checked by AVG Anti-Virus.
> Version: 7.0.338 / Virus Database: 267.10.13/78 - Release Date: 19/08/2005
> 



FOCUS Computing
Mob: 0415 24 26 24
murray@focus-computing.com
http://www.focus-computing.com



-- 
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.338 / Virus Database: 267.10.13/78 - Release Date: 19/08/2005


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@struts.apache.org
For additional commands, e-mail: user-help@struts.apache.org


Re: Using custom tags and struts together

Posted by Wendy Smoak <ja...@wendysmoak.com>.
> I hope you are getting the picture.  This is why I was trying to use a
> custom tag that could still interact with my model, call business methods 
> to make security
> decisions and vary the generated link accordingly.
>
> And finally the question: How should I go about writing the "Update" link
> now that we all understand the problem?

You could put all that logic somewhere else, perhaps in a bean with an 
'isUpdateAllowed' method:
   <c:if test="${security.updateAllowed}">  <html:link ...>  </c:if>

Since you're already okay with a custom tag, what about extending the Struts 
link tag to do what you need?

I also wonder if you really need <html:link> for this-- you're already 
hard-coding the action name.  If there's nothing dynamic about the link 
other than that 'entry[ix]' part, then can you just write out the <a 
href="..."> from your custom tag?

I've had success with request.isUserInRole-- *without* getting into custom 
Realms and CMA-- just add a Filter, wrap the request and override 
'isUserInRole'.  It sounds like you have some "levels" that could be roles. 
(I'm not clear on the runtime checking of fields you mentioned, but you 
should have access to the request and session from isUserInRole.)  Struts 
has <logic:present role="..."> (and there's probably a JSTL equivalent).
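
The filter-and-wrapper idea above can be sketched as follows. This is a
minimal, self-contained sketch: it uses a stand-in Request interface so
it compiles on its own, where a real webapp would extend
HttpServletRequestWrapper inside a Filter; the role names are
illustrative.

```java
import java.util.Set;

// Stand-in for the Servlet request interface, to keep the sketch
// self-contained; in a webapp you would wrap HttpServletRequest
// with HttpServletRequestWrapper instead.
interface Request {
    boolean isUserInRole(String role);
}

// Wrapper that answers role checks from application-level data
// (e.g. the user's "level" stored in the session), falling back
// to the container's answer when the application has no opinion.
class RoleAwareRequest implements Request {
    private final Request wrapped;
    private final Set<String> appRoles;

    RoleAwareRequest(Request wrapped, Set<String> appRoles) {
        this.wrapped = wrapped;
        this.appRoles = appRoles;
    }

    @Override
    public boolean isUserInRole(String role) {
        // Application data decides first; otherwise defer to the container.
        return appRoles.contains(role) || wrapped.isUserInRole(role);
    }
}
```

A Filter would build the wrapper in doFilter() from whatever session
data encodes the user's level, then pass it down the chain, so that
request.isUserInRole() and <logic:present role="..."> see the
overridden answer.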

-- 
Wendy Smoak 



---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@struts.apache.org
For additional commands, e-mail: user-help@struts.apache.org


Re: OT: email exposed to spammers

Posted by Ryan Schmidt <su...@ryandesign.com>.
On Jul 24, 2006, at 18:48, Oliver Betz wrote:

> Kamal wrote:
>
> [mail address shows up in public archives]
>
>> I always consider this in non-advanced-software-mailing-lists (). I
>
> Even if "the" list archives (not really the list itself) obfuscate the
> addresses, there might be another archive, maybe an "unofficial"
> archive not doing so.
>
> AFAIK, there are at least two archives of this mailing list.
>
> So even if it seems that the addresses are obfuscated in "the"
> archive, don't rely on them not showing up somewhere else.

The unofficial list archives which everybody uses at svn.haxx.se do  
obfuscate email addresses. For example, view the source of this page:

http://svn.haxx.se/users/archive-2006-07/0955.shtml

Unfortunately, the official list archives provided by tigris.org /  
collabnet which nobody uses because they are so awful do not  
obfuscate email addresses. See the source of this:

http://subversion.tigris.org/servlets/ReadMsg?list=users&msgNo=52123

This is bug 3, reported in September 2003, which they refuse to fix,  
on ludicrous and incorrect grounds:

http://www.tigris.org/issues/show_bug.cgi?id=3

The grounds are ludicrous and incorrect because even the simplest  
form of obfuscation (for example, changing the @ sign to " at ") has  
been effective in preventing me from getting even a single spam email  
on the other mailing lists I'm on where they do that.
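
That simplest form of obfuscation is essentially a one-liner; a quick
sketch in Java (class and method names are illustrative):

```java
// Simplest address obfuscation: replace "@" with " at ".
// Per the author's experience, even this defeats address harvesters.
public class Obfuscate {
    static String obfuscate(String address) {
        return address.replace("@", " at ");
    }

    public static void main(String[] args) {
        System.out.println(obfuscate("user@example.org")); // prints "user at example.org"
    }
}
```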

I receive an average of one spam message a day to my Subversion  
mailing list address thanks to this asininity, and have had to  
generate a new email address every three months or so to try to keep  
this from spiraling out of control. It incenses me that the list  
admins place such little value on the time of their contributors. But  
at the moment the only options appear to be to stay on the list and  
accept the spam, or to leave the list. For the moment, I'm still  
choosing the former.


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Merge Help

Posted by Ben Collins-Sussman <su...@collab.net>.
On Jun 4, 2005, at 7:03 AM, trlists@clayst.com wrote:

>
> Re the earlier discussion on timestamps, what I really want here is  
> for
> the files in the live/ wc to have the same timestamps as they had in
> the dev/ wc -- i.e. the last time the file was actually modified
> (putting aside any mods to resolve conflicts).  So unless I'm using
> merge incorrectly, I think this is a case where you would want  
> merge to
> preserve the timestamps, if they were being preserved at all.
>

I understand your point, it makes sense.  In other words, if  
subversion were versioning timestamps/owner/permission metadata, then  
that stuff would be preserved when porting changes, just the way file  
contents are preserved.

Unfortunately, subversion doesn't version filesystem metadata.  The  
current design is such that when you import files into the subversion  
repository, you're moving them to a new filesystem (the repository),  
and it then generates its *own* filesystem metadata:  last-commit- 
time, last-author, last-changed-rev, filesize.  This is all the stuff  
that 'svn ls -v URL' shows.  All of the original metadata is lost.

The working copy is just a disposable shadow of the 'real'  
filesystem, so you need to switch paradigms here.  Instead of using  
OS timestamps to copy stuff between working copies, use the *real*  
filesystem to do the syncing -- that is, the repository.  'svn  
commit' when you need to save your work, 'svn update' on another box  
to get the changes in progress.  The svn repository is capable of  
doing everything that your timestamp-syncing script is already  
doing;  you just need to embrace it as the center of your workflow now.


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Attention list owners

Posted by "C. Michael Pilato" <cm...@collab.net>.
"Jan Hendrik" <ja...@bigfoot.com> writes:

> Could the owners of the list remove me from this, that is from all 
> lists hosted on tigris.org, please? 

What address is subscribed?  I don't see any obvious matches in the
list of subscribers on dev@s.t.o or user@s.t.o, and Fitz claims he has
to moderate your mails through.

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Attention list owners

Posted by Brian Behlendorf <br...@collab.net>.
Note that if the address the person subscribed as has changed over time to 
a different one, with forwarding from the old one, it may be non-obvious 
how to unsub.  The address that is actually subscribed is embedded in the 
headers of every message, in the Return-Path, for example:

Return-Path: <de...@subversion.tigris.org>

shows me that brian@collab.net is the address used to reach me.  Jan, 
please use that to figure out which address is subscribed to each list.

 	Brian


On Wed, 5 Jan 2005, Max Bowsher wrote:
> The correct way to unsubscribe is to send email to:
>
> dev-unsubscribe@subversion.tigris.org
> users-unsubscribe@subversion.tigris.org
> dev-unsubscribe@tortoisesvn.tigris.org
>
> from your subscribed address.
>
> This is described on the project webpages.
>
> If, for some reason this is not working for you, you should provide copies of 
> the emails telling you of the failure to unsubscribe, so we may understand 
> why it is not working for you.
>
> Max.
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
> For additional commands, e-mail: dev-help@subversion.tigris.org
>

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org


Re: Attention list owners

Posted by Jan Hendrik <ja...@bigfoot.com>.
Concerning Re:  Attention list owners
Max Bowsher wrote on 5 Jan 2005, 18:41, at least in part:

> Jan Hendrik wrote:
> > the websites I could not get any further, probably as there was no
> > registration needed back when I subscribed.  But to register now to
> > become unlisted is IMNSHO spammers' behaviour.
> 
> Registration is *NOT* required to unsubscribe, so I would _appreciate_ you 
> not describing us as spammers.

Max,
if you say so, then my _probably_ is obsolete.  Sorry that I hurt 
you, but I won't start a discussion now on the many definitions of 
spam (which I generally define rather narrowly as the notorious 
Nigeria, porn, software ad, and phishing stuff), on what counts as 
spammers' behaviour, or on when once-legitimate mail can turn into 
something like spam in my or anyone's eyes.

> The correct way to unsubscribe is to send email to:
> 
> dev-unsubscribe@subversion.tigris.org
> users-unsubscribe@subversion.tigris.org
> dev-unsubscribe@tortoisesvn.tigris.org

For the record, I sent empty mails by double-clicking on the 
addresses you provided above, double-checking that the identity is 
set to this bigfoot address, and will add the reply of the list 
server - if any - to my later posting to C. Michael Pilato.

Best regards,

Jan Hendrik

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Attention list owners

Posted by Max Bowsher <ma...@ukf.net>.
Jan Hendrik wrote:
> Could the owners of the list remove me from this, that is from all
> lists hosted on tigris.org, please?
>
> Last March I tried to do so, just as futilely as again later in the
> summer, when I anticipated getting into PHP, Perl, Python, and MySQL
> for a major relaunch of websites here, leaving no time to scan 150-200
> list mails per day.  However, neither sending command mails to the
> list server as described in the help received on subscription nor use
> of the unsubscribe link in the head of list mails had any effect.  On
> the websites I could not get any further, probably as there was no
> registration needed back when I subscribed.  But to register now to
> become unlisted is IMNSHO spammers' behaviour.

Registration is *NOT* required to unsubscribe, so I would _appreciate_ you 
not describing us as spammers.

The correct way to unsubscribe is to send email to:

dev-unsubscribe@subversion.tigris.org
users-unsubscribe@subversion.tigris.org
dev-unsubscribe@tortoisesvn.tigris.org

from your subscribed address.

This is described on the project webpages.

If, for some reason this is not working for you, you should provide copies 
of the emails telling you of the failure to unsubscribe, so we may 
understand why it is not working for you.

Max.


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org


Re: [PATCH][MERGE-TRACKING] New function to create a mergeinfo hash out of a single path and single merge-range

Posted by Daniel Rall <dl...@collab.net>.
On Tue, 24 Oct 2006, Madan S. wrote:

> On Mon, 09 Oct 2006 15:30:59 +0530, Madan U Sreenivasan <ma...@collab.net>  
> wrote:
> 
> >Hi,
> >
> >    As per http://svn.haxx.se/dev/archive-2006-09/0975.shtml, please find
> >below an internal function that makes a mergeinfo hash out of a single
> >path and a single merge-range list.
> >
> >    I have gone the svn_sort__hash() way of adding a '__' function in a
> >header file. The other option would have been to create a libsvn_subr
> >local header file (ala utf_impl.h), but I feel this is an overkill for a
> >single function.
> 
> I haven't heard any serious reservations against this patch, but it
> hasn't been committed yet. So this mail is just a reminder: I need this
> to complete my repos_to_repos_copy() stuff on the merge_tracking branch.

Before committing a change like this, I'd like to see a patch putting
it to use everywhere possible to make sure it's worth adding to our
internal API.

Re: Source dist build

Posted by Jordi Salvat i Alabart <js...@atg.com>.

Jeremy Arnold wrote:
> Mike,
>    Jordi did all the work -- I just spent 5 minutes having Eclipse look 
> for where methods were called from.

Yeah, but I was pretty lost.

>    Getting a dump of CVS sounds like a reasonable plan for this 
> release.
+1

-- 
Salut,

Jordi.


Re: Source dist build

Posted by Jeremy Arnold <je...@bigfoot.com>.
Mike,
    Jordi did all the work -- I just spent 5 minutes having Eclipse look 
for where methods were called from.

    Getting a dump of CVS sounds like a reasonable plan for this 
release.  But this might be something to discuss for the next release.  
Moving to Maven might make it a moot point anyway, since it will just 
find the libraries it needs (whether on the local system or on the 
network).  If we don't move to Maven, we'll have to talk about whether 
it is better to include all of the libraries or to just write a 
BUILDING.TXT file which lists the versions of the libraries we've tested 
with and their download locations.

Jeremy

mstover1@apache.org wrote:

>Thanks to both Jeremy and Jordi for finding and fixing these problems.  I also 
>fixed the lack of jdom.jar in the release.  The src files are out there, however, 
>wouldn't it be preferable for the src dist to be a mirror of the files as they 
>appear in CVS?  As it is now, someone can download the src, and they're 
>really only half done in terms of getting everything they need to compile 
>JMeter.  Plus, the versions of libs they choose to download might 
>differ from the versions that 1.9 uses, which seems like a potential problem.
>
>I'm thinking src_dist should simply tar up all the cvs files as is and be done.  
>What do you all think?
>
>-Mike
>  
>



---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Re: Source dist build

Posted by Jeremy Arnold <je...@bigfoot.com>.
Hello,
    In principle, I agree with everything Sebastian said...at least for 
the src versions.  (I still have to think about his comments on the 
binary versions.)  Mike is correct about JMeter having a different 
target audience than most Jakarta projects.  However, I think that the 
assumptions change a bit for source distributions -- if you download the 
source distribution you fall into the "developer" category rather than 
just a "user".  Of course, if we split the binary distribution as 
Sebastian suggested, the "external jars" part could be used with the 
source distribution too, which might simplify things a bit.

    In spite of this, I'll still give my thumbs up to Mike for including 
the external jars in the source distribution for this release.  This 
matches (I think) how it's been done in past JMeter releases, and it is 
also the simplest option.  Since we're already a few days behind the 
binary 1.9 release, simple is good.  Let's wait for the next release 
before we make big changes to how we distribute everything.

Jeremy


Sebastian Bazley wrote:

>I agree that src_dist should just include the source files, and not any external jars, and should ideally only include the xml docs,
>not the generated html ones.
>
>Building JMeter would obviously require the run-time jars, but these would be needed for the binary distribution as well - I don't
>see any point in including them in both source and binary distributions.
>
>It may be worth splitting the binary distribution into two:
>- the external jars
>- the rest of the existing binary distribution. [Perhaps remove the HTML files from docs, and keep just the printable_docs
>versions?]
>
>As to including Ant in the distribution, I agree with Jeremy that it should not be included.
>It seems to me that developers are likely to have this installed anyway, and Ant is surely now stable enough that any recent version
>will build JMeter.
>
>As far as I can tell, the only other external dependency is Anakia (Velocity), which is needed to create the documentation.
>This could perhaps be included in the source distribution, but I think it would be better just to leave up to developers to download
>it separately. As with Ant, they might already have it. The build.xml file can be updated to allow anakia to be picked up from lib/
>and/or ../jakarta-site2/.
>
>Which reminds me: I would like to propose some enhancements to build.xml:
>
>- allow the source/binary distributions to be created without rebuilding everything. This can be done by introducing two new targets
>which just do the tar/[g]zip etc. These targets could be marked "internal", i.e. no description.
>E.g. Make "dist" depend on "dist_tar", and move the tar/gzip/zip to dist_tar. Similarly for src_dist.
>[The original dist and src_dist targets would still do the same work, but refactored.]
>
>- similarly, add a target (test_only ?) that can be used to test JMeter without needing to do a full build.
>At present, this means one cannot test a binary JMeter distribution.
>[One would need to include build.xml and bin/testfiles in binary distributions.]
>
>- be able to use build.xml without needing build.sh or build.bat (e.g. use "ant [target]" or Eclipse)
>As it stands, build.bat does not agree with build.sh on where to look for libraries.
>Also, some of the classpath information is in build.xml and some is in build.xxx, which is not ideal.
>
>This means updating some classpath definitions, and adding a classpath for Anakia. I've got this working on Windows XP.
>I've not yet had a chance to try this on Unix, but when I do, I can post a patch to Bugzilla - unless there are any
>objections/further suggestions?
>  
>
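
Sebastian's two-target refactoring might look roughly like this in
build.xml. This is only a sketch: the target names follow his
suggestion, but the `package` dependency, property names, and archive
paths are illustrative stand-ins for whatever the real build uses.

```xml
<!-- Internal packaging-only target: no description attribute, so it is
     hidden from "ant -projecthelp".  Assumes the distribution tree has
     already been staged under ${dist.dir} by a prior full build. -->
<target name="dist_tar">
  <tar tarfile="${dist.dir}/jmeter.tar" basedir="${dist.dir}"/>
  <gzip src="${dist.dir}/jmeter.tar" destfile="${dist.dir}/jmeter.tgz"/>
  <zip zipfile="${dist.dir}/jmeter.zip" basedir="${dist.dir}"/>
</target>

<!-- Public target: same observable behaviour as before, but refactored
     so the tar/gzip/zip work lives in dist_tar and can be run alone. -->
<target name="dist" depends="package, dist_tar"
        description="Build and package the binary distribution"/>
```

With this split, "ant dist" still does the full build and packaging,
while "ant dist_tar" repackages without rebuilding, as proposed.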



---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Re: Source dist build

Posted by Jeremy Arnold <je...@bigfoot.com>.
Hello,
    In principle, I agree with everything Sebastian said...at least for 
the src versions.  (I still have to think about his comments on the 
binary versions.)  Mike is correct about JMeter having a different 
target audience than most Jakarta projects.  However, I think that the 
assumptions change a bit for source distributions -- if you download the 
source distribution you fall into the "developer" category rather than 
just a "user".  Of course, if we split the binary distribution as 
Sebastian suggested, the "external jars" part could be used with the 
source distribution too, which might simplify things a bit.

    In spite of this, I'll still give my thumbs up to Mike for including 
the external jars in the source distribution for this release.  This 
matches (I think) how it's been done in past JMeter releases, and it is 
also the simplest option.  Since we're already a few days behind the 
binary 1.9 release, simple is good.  Let's wait for the next release 
before we make big changes to how we distribute everything.

Jeremy


Sebastian Bazley wrote:

>I agree that src_dist should just include the source files, and not any external jars, and should ideally only include the xml docs,
>not the generated html ones.
>
>Building JMeter would obviously require the run-time jars, but these would be needed for the binary distribution as well - I don't
>see any point in including them in both source and binary distributions.
>
>It may be worth splitting the binary distribution into two:
>- the external jars
>- the rest of the existing binary distribution. [Perhaps remove the HTML files from docs, and keep just the printable_docs
>versions?]
>
>As to including Ant in the distribution, I agree with Jeremy that it should not be included.
>It seems to me that developers are likely to have this installed anyway, and Ant is surely now stable enough that any recent version
>will build JMeter.
>
>As far as I can tell, the only other external dependency is Anakia (Velocity), which is needed to create the documentation.
>This could perhaps be included in the source distribution, but I think it would be better just to leave it up to developers to download
>it separately. As with Ant, they might already have it. The build.xml file can be updated to allow Anakia to be picked up from lib/
>and/or ../jakarta-site2/.
>
>Which reminds me: I would like to propose some enhancements to build.xml:
>
>- allow the source/binary distributions to be created without rebuilding everything. This can be done by introducing two new targets
>which just do the tar/[g]zip etc. These targets could be marked "internal", i.e. no description.
>E.g. Make "dist" depend on "dist_tar", and move the tar/gzip/zip to dist_tar. Similarly for src_dist.
>[The original dist and src_dist targets would still do the same work, but refactored.]
>
>- similarly, add a target (test_only ?) that can be used to test JMeter without needing to do a full build.
>At present, this means one cannot test a binary JMeter distribution.
>[One would need to include build.xml and bin/testfiles in binary distributions.]
>
>- be able to use build.xml without needing build.sh or build.bat (e.g. use "ant [target]" or Eclipse)
>As it stands, build.bat does not agree with build.sh on where to look for libraries.
>Also, some of the classpath information is in build.xml and some is in build.xxx, which is not ideal.
>
>This means updating some classpath definitions, and adding a classpath for Anakia. I've got this working on Windows XP.
>I've not yet had a chance to try this on Unix, but when I do, I can post a patch to Bugzilla - unless there are any
>objections/further suggestions?
>  
>
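
The dist/dist_tar refactoring proposed above could look roughly like the following build.xml fragment. This is a sketch only: target names other than dist/dist_tar, the depends lists, and the file locations are illustrative, not taken from the real JMeter build.xml.

```xml
<!-- Public target: still does the full build, but the packaging work has
     been factored out into dist_tar, which can also be run on its own. -->
<target name="dist" depends="compile, docs, dist_tar"
        description="Build everything and package the binary distribution."/>

<!-- "Internal" target: no description attribute, so Ant's -projecthelp
     does not list it. Just does the tar/gzip/zip, no rebuild. -->
<target name="dist_tar">
  <tar destfile="${dist.dir}/jmeter.tar" basedir="${build.dir}"/>
  <gzip src="${dist.dir}/jmeter.tar" destfile="${dist.dir}/jmeter.tar.gz"/>
  <zip destfile="${dist.dir}/jmeter.zip" basedir="${build.dir}"/>
</target>
```

An src_dist/src_dist_tar pair would follow the same pattern.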



Re: JMeter 2.0 [Re: Source dist build]

Posted by Jordi Salvat i Alabart <js...@atg.com>.

Jeremy Arnold wrote:
>  [...] But since most threads are sleeping most 
> of the time, perhaps we can come up with some sort of thread pool, so 
> that a large number of JMeter "threads" (perhaps better to call them 
> "users" in this case) could be handled by a smaller number of JVM 
> threads.  It could be a bit tricky to ensure that we have the right 
> number of JVM threads to handle the JMeter users, and that samples are 
> executed when they are supposed to.  But it seems like there could be 
> potential.

I usually prefer to leave the low-level stuff at the low-level layer: 
the JVM makers should take care of threading efficiency (and it looks 
like they do; see 
http://developer.java.sun.com/developer/technicalArticles/JavaTechandLinux/RedHat/ 
).

> When I read Jordi's message, I thought he was referring to having a system 
> dedicated to performance regression tests, so that we can see the 
> effects of changes to JMeter on its performance.  For example, if we 
> start messing with a thread pool, we would need to be certain that we 
> weren't impacting the results (at least not negatively -- but even if we 
> made an improvement it would be good to document that).
> 

Yes, that's exactly what I meant. Thanks for the clarification.

> Seems like we've got some high hopes for JMeter 2.0...even in just a 
> short discussion -- I'm looking forward to getting started on it.
> 
> Jeremy
> http://xirr.com/~jeremy_a
> 
> 
> 
> 
> 

-- 
Salut,

Jordi.


Re: JMeter 2.0 [Re: Source dist build]

Posted by Jeremy Arnold <je...@bigfoot.com>.
It's always nice to see other people thinking in the same general 
directions that I am.

I think Jordi is on the right track about having a separate analysis 
component.  I would like to keep the Visualizers out of the Test Plan -- 
leave the Test Plan with the job of describing the test.  Make some 
basic statistics available at runtime during the test -- how many 
samples passed/failed, an estimate of throughput and response time, and 
perhaps some other data which can be calculated or estimated cheaply at 
runtime (including with remote engines).  Have each engine track more 
detailed data which can be aggregated at the end of the run, and then 
more detailed analysis can be done on this data.

Obviously there are some cases that have to be treated specially -- some 
data is expensive to collect, so you wouldn't necessarily want to store 
it unless the user specifically requested it.

Another extension I would like to see is pluggable modules to provide 
extra data which can be correlated with the data that JMeter collects.  
One such module would be to get the CPU utilization on the remote server 
system.  Another could get performance statistics from a Tomcat server.  
Or a WebSphere server.  Or whatever else somebody felt was useful enough 
to write a module for.  JMeter wouldn't need to know the details about 
what is being stored...we just have to develop some kind of generic way 
to store it.


Regarding single threaded operation:  I think single threaded would 
probably not be a good idea.  But since most threads are sleeping most 
of the time, perhaps we can come up with some sort of thread pool, so 
that a large number of JMeter "threads" (perhaps better to call them 
"users" in this case) could be handled by a smaller number of JVM 
threads.  It could be a bit tricky to ensure that we have the right 
number of JVM threads to handle the JMeter users, and that samples are 
executed when they are supposed to.  But it seems like there could be 
potential.

>>Some performance and accuracy tests would also be great. I'm thinking on 
>>how to do those. An important bit would be unused hardware available for 
>>a long term for this purpose only (or almost)... I think I can provide this.
>>    
>>
>
>I've used various techniques to ensure the accuracy of my numbers - primarily running an extra 
>test client with a very low load and comparing its numbers to those of the high-load 
>clients.  I think the best way to handle it is through documentation to explain these techniques 
>and other ways of analyzing data.  Another way to help might be a visualizer that shows 
>samples as a line that demonstrates its beginning time and end time, making it easy to see 
>overlapping samples, and thus see potential timing conflicts.
>  
>
When I read Jordi's message, I thought he was referring to having a system 
dedicated to performance regression tests, so that we can see the 
effects of changes to JMeter on its performance.  For example, if we 
start messing with a thread pool, we would need to be certain that we 
weren't impacting the results (at least not negatively -- but even if we 
made an improvement it would be good to document that).


Seems like we've got some high hopes for JMeter 2.0...even in just a 
short discussion -- I'm looking forward to getting started on it.

Jeremy
http://xirr.com/~jeremy_a



Re: JMeter 2.0 [Re: Source dist build]

Posted by Jordi Salvat i Alabart <js...@atg.com>.

mstover1@apache.org wrote:
> Lots of little things like the drag and drop need polishing - I'd prefer to be able to drag and drop 
> multiple files at once, for instance.  I'm not sure exactly what you are referring to with Eclipse (I 
> don't find myself dragging files around in Eclipse), but I imagine you are thinking of a system 
> whereby visual cues are provided to indicate whether you're about to drop an element into, 
> above, or below a tree node.  I wouldn't think that would be too hard.

Sorry -- my mistake: Eclipse does _not_ do that. But you understood what 
I meant. For an example, see Mozilla 1.4 bookmark management (checked 
this one this time). You're right: drag'n'drop of multiple files is a must.

> Yes, and maybe automatic adding of Cookie Managers to plans that include HTTPSamplers?
> 

Or a wizard that sets things up for HTTP work?
- Ask for server & port (default 80).
- Ask whether you want the script to get images, applets, etc. or not
- Set up Thread Group, Recording Controller, one listener of choice, 
Cookie Manager.
- Set up Proxy with appropriate stuff depending on inputs above.

>>How about leaving listeners for real-time test result visualization & 
>>test result gathering/saving and having a separate application (or 
>>module) for more complex data analysis. Maybe there's something in the 
>>non-market we can use straight away?
> 
> 
> Sounds great.
> 

I'll start a search.

>>Instead, I would focus on accuracy by raising the priority of threads 
>>during actual sampling. Would not improve total performance in terms of 
>>max throughput, but would improve measurement accuracy at mid and high 
>>loads.
> 
> 
> I've thought about this but I don't think it scales up very high.  The majority of any of JMeter's 
> threads time is spent sleeping, either in timer delay, or waiting for IO.  Giving all your IO 
> waiting threads a higher priority doesn't help much.  I also think it might worsen things to make 
> a bunch of threads sitting on IO calls the highest priority!
> 

It should not worsen things much: a sleeping thread is a sleeping 
thread, no matter at which priority. Only that once it wakes up, it 
would run with a minimum of obstacles to completion of the sampling.

Of course it would not improve throughput at all -- if anything it would 
reduce it slightly, because switching priorities has a cost, even if 
small. But accuracy at high loads could improve significantly.

> You mention socket factories - is it possible for JMeter to control all sockets created within the 
> JVM?

I have no idea. Was just a shot in the dark, but I'll research this.

-- 
Salut,

Jordi.




Re: JMeter 2.0 [Re: Source dist build]

Posted by ms...@apache.org.
Great feedback, Jordi - responses below.

On 11 Aug 2003 at 10:15, Jordi Salvat i Alabart wrote:

> mstover1@apache.org wrote:
> > I've been using JMeter as a user quite a bit the past few weeks, and I've learned some things 
> > about it.  One is that it's very tedious to use, and so a lot of my thoughts have to do with 
> > creating more powerful tools to manipulate test scripts.  I think I'd like to introduce the idea of 
> > alternate ways to view a test plan, ala eclipse, so that different aspects of test plan editing can 
> > be brought to the forefront.
> > 
> 
> It's true that test editing is tedious, but I don't really see different 
> "aspects" in such a heavy way as Eclipse -- maybe visualization options?
> 
> Control vs. non-control elements: you had commented in the past about 
> control elements (controllers & samplers) vs. non-control elements 
> (where order essentially doesn't matter). Would be great to have an 
> option to show/hide those non-control elements when viewing the tree. 
> Also to see them in a separate panel showing all those applying to the 
> current control element -- with 'inherited' ones greyed out. Most 
> importantly because it would provide new (and not-so-new) users a 
> clearer view of which non-control elements apply to which control elements.
[reordering]
> Bulk editing: A find/replace feature the most obvious. Another nice one 
> could be to be able to select multiple test elements of the same type 
> and see the editor in the right panel show white fields for values that 
> are equal in all of them -- you could edit these straight-away -- and 
> fields with different values in grey -- possibly non-editable.

A perfect example - a view that shows you a slice of the test plan, by component type, and 
provides an easy way to edit all at once.  I would think that you'd want such code to not get 
mixed up with the existing GUI code, and thus it would be a separate module that provided you 
a different view of things.  Right now, too many elements are closely coupled in order to show 
the one particular view of things - JMeterTreeModel, JMeterTreeListener,GuiPackage, for 
instance.  The tree model should probably be a dumber data model that actors manipulate, 
and that would provide a good start toward implementing other views and editing options.

> 
> Tree editing: Eclipse trees have a nice way of indicating whether to 
> insert before, insert after, or add as child which would be very handy 
> -- our current way is a pain. I don't know if that's doable in Swing, 
> though.

Lots of little things like the drag and drop need polishing - I'd prefer to be able to drag and drop 
multiple files at once, for instance.  I'm not sure exactly what you are referring to with Eclipse (I 
don't find myself dragging files around in Eclipse), but I imagine you are thinking of a system 
whereby visual cues are provided to indicate whether you're about to drop an element into, 
above, or below a tree node.  I wouldn't think that would be too hard.

> 

> 
> Protocol pre-selection: by having options on which protocols we want to 
> use in the test we could avoid cluttering the menus with samplers & 
> config elements not applicable to those protocols.

Yes, and maybe automatic adding of Cookie Managers to plans that include HTTPSamplers?

> 
> Screen real estate usage: reducing font size, getting rid of useless 
> spacing, etc... so that more space is left for panels such as the HTTP 
> request parameters.

Absolutely - I figured people would complain if I changed the font size though.

> 
> Another usability issue: it would be really nice to have certain test 
> elements provide a "dynamically-generated" default name (used in case 
> you leave the Name field blank). E.g. "Timer: 1.5 sec.", "Timer: 
> 10.0±5.0 sec.", "/home/index.jsp",...
> 
> > Remote testing needs to be revamped because it's pointless to have 10 remote machines all 
> > trying to stuff responses down the I/O throat of a single controlling machine - better to have the 
> > remote machines keep the responses till the end and not risk the accuracy of throughput 
> > measurements.  Perhaps a simpler format can be created for remote testing whereby during 
> > the test only success/failure plus response time is sent to the controlling machine, and 
> > everything else waits to the end of the test.
> 
> I agree, but note that this means significant rewrite of all listeners, 
> so that they can handle this two-phase input and still show meaningful 
> results.

Or the SampleListener interface could be given an extra method: 
summarySampleOccurred(long time,boolean success);

Really, all we need to know is that the test is running and samples are happening.  And at the 
end of the test, an easy way to retrieve the entire, fully recorded results.  Which could be 
handled by your new analysis module.

> 
> > I want test results categorized by test run, and not just as a list of sampleResults.  A set of 
> > sample results has a metadata set that describes the test run, and JMeter should be able to 
> > use such metadata to potentially combine test run results and also display statistics 
> > comparing two test runs (ie, graphing # users vs throughput).  
> 
> How about leaving listeners for real-time test result visualization & 
> test result gathering/saving and having a separate application (or 
> module) for more complex data analysis. Maybe there's something in the 
> non-market we can use straight away?

Sounds great.

> 
> > Result files need to be abstract datasources with an interface that visualizers talk to without 
> > knowing whether the backing data is an XML file, a CSV file, a database, etc.  Right now, 
> > JMeter knows how to write CSV files, but can't read them!
> 
> Note this would make sense if we had the separate analysis application I 
> was talking about.
> 
> > A defined interface will help us 
> > modularize this code whereas currently it's mixed up with the code for reading and writing test 
> > plan files.
> > 
> > Visualizers should be able to output useful file types for distribution of results to non-jmeter 
> > users.  HTML and PNG files, for instance.  Some way of exporting the data to a format that 
> > can be easily posted.
> 
> Again, a separate analysis tool could take care of this.
> 
> > I wanted to make JMeter single threaded with the new non-blocking IO packages, but I don't 
> > think this is feasible.
> 
> Definitely not doable for the Java samplers. Extremely difficult for 
> JDBC, difficult and probably not worth it for the rest (just my view -- 
> seems to match your's though).
> 
> Instead, I would focus on accuracy by raising the priority of threads 
> during actual sampling. Would not improve total performance in terms of 
> max throughput, but would improve measurement accuracy at mid and high 
> loads.

I've thought about this but I don't think it scales up very high.  The majority of any of JMeter's 
threads time is spent sleeping, either in timer delay, or waiting for IO.  Giving all your IO 
waiting threads a higher priority doesn't help much.  I also think it might worsen things to make 
a bunch of threads sitting on IO calls the highest priority!

> 
> Some performance and accuracy tests would also be great. I'm thinking on 
> how to do those. An important bit would be unused hardware available for 
> a long term for this purpose only (or almost)... I think I can provide this.

I've used various techniques to ensure the accuracy of my numbers - primarily running an extra 
test client with a very low load and comparing its numbers to those of the high-load 
clients.  I think the best way to handle it is through documentation to explain these techniques 
and other ways of analyzing data.  Another way to help might be a visualizer that shows 
samples as a line that demonstrates its beginning time and end time, making it easy to see 
overlapping samples, and thus see potential timing conflicts.

> 
> >  It's possible to do if you can get access to the very sockets that do the 
> > communicating, but how will you get that for jdbc drivers?  Even for HTTP, we'd have to write 
> > our own HTTP Client from which we could gain access to the socket being used and control 
> > the IO for it (or take the commons client and modify it so).  Because to put it all in a single 
> > threaded model, we'd have to take control of the IO part, and force the samplers to hand their 
> > sockets to some central code that would take the socket, take the bytes the sampler wants to 
> > send, and it would hand back the return bytes plus timing info.  It'd be nice, but I don't think it's 
> > feasible for most protocols.
> > 
> > JMeter needs to collect more data.  Size of responses should be explicitly collected to help 
> > throughput calculations of the form bytes/second.  Timing data should include a latency 
> > measurement in addition to the whole response time.
> 
> Totally agree. The complete split would be:
> 1- DNS resolution time
> 2- Connection set-up time (SYN to SYN ACK)
> 3- Request transmission time (SYN ACK to ACK of last request packet)
> 4- Latency (ACK of last request packet to 1st response data packet)
> 5- Response reception time
> I'm not sure JMeter is the tool to separate 1,2,3 (this is more of an 
> infrastructure-level thing rather than application-level), but 1+2+3+4 
> separate from 5 is a must. Top commercial tools separate them all.

You mention socket factories - is it possible for JMeter to control all sockets created within the 
JVM? And, if so, couldn't JMeter by that means take control of the low level input and output?  
The question then becomes, how do we match up this data from the low level socket control to 
the Sampler responsible for the data?

> 
> More accurate simulation of browser behaviour in terms of # of 
> concurrent connections, keep-alives, etc. would also be great. Even in 
> terms of available bandwidth: simulating modem/ISDN/ADSL users. Again, 
> this may not be JMeter's job -- application-level testing is more 
> important, IMO.
> 
> The problem is same as above: this requires access to the internals of 
> the client code. How to do this for JDBC? Maybe changing socket 
> factories? But it's a must, so we need to think about it.
> 
> >  Multiple SampleResponses need to be 
> > dealt with better - I'm thinking that instead of an API that looks like:
> > 
> > Sampler{
> >    SampleResult sample();
> > }
> > 
> > We need one that's more based on a callback situation:
> > Sampler {
> >    void sample(SendResultsHereService callback);
> > }
>  >
> > so that Samplers can send multiple results to the collector service.  This would make 
> > samplers more flexible for when scripting in python is allowed - to allow the adhoc scripter to 
> > push out sample results at any time during their script.
> > 
> I feel pushing out multiple separate samples belongs more to controller 
> land rather than sampler land...

Good point - I'm all in favor of controllers sending out SampleResult events.
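
The callback-style API from the quoted pseudocode might be fleshed out as follows. SendResultsHereService and Sampler come from the list message; the String stand-in for SampleResult and the collector are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

interface SendResultsHereService {
    void sampleOccurred(String result);   // String stands in for SampleResult
}

interface Sampler {
    // Instead of returning a single SampleResult, push any number of them.
    void sample(SendResultsHereService callback);
}

public class CallbackSamplerSketch {
    // Run one sampler, collecting whatever it pushes into a list.
    public static List<String> run(Sampler s) {
        List<String> results = new ArrayList<>();
        s.sample(results::add);           // collector service backed by the list
        return results;
    }

    public static void main(String[] args) {
        // A sampler that emits a main result plus a sub-result (e.g. a redirect).
        Sampler s = cb -> {
            cb.sampleOccurred("GET /index");
            cb.sampleOccurred("GET /redirect");
        };
        System.out.println(run(s));       // two results from one sample() call
    }
}
```

The same callback would serve a scripted (e.g. Python) sampler pushing results mid-script, or, per Jordi's point, a controller emitting results on behalf of its children.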

> 
> > Given this, post-processors like assertions need a way to know which 
> > result to apply themselves to.  We already have this problem wherein redirected samples 
> > confuse these components.  We need a way to either mark a particular response as "the 
> > main one" or define a response set all of which need to be tested by the applicable 
> > post-processors.
> 
> Isn't the current "sample-tree" structure correct for this? Wouldn't it 
> be enough to have post-processors, listeners,etc. know about such 
> "structured" sample results?

You're probably right.
> 
> > I'd also like to replace the Avalon Configuration stuff with something that can load files more 
> > stream-like and piecemeal, instead of creating a DOM and then handing it over to JMeter.  It 
> > goes too long without any feedback for the user.  Plus uses a ton of memory.
> 
> Maybe javax.beans.XMLEncoder/Decoder can help? (Never used it, just 
> adding it to the long list).
> 
> > Sun's HTTP Client should be replaced.  As the cornerstone of JMeter, we ought to have one 
> > that is highly flexible to our needs, provides the most accurate timing it can, the most 
> > performance possible, the least resource intensive as possible, and the most transparency to 
> > JMeter's controlling code.  I think the commons HTTP Client is probably a good place to start, 
> > being open-source, we can craft it to our needs.
> 
> Totally agree that it needs to be replaced and that the HTTP Client is 
> our best bet.

Seems like we all think that.

-Mike

> 
> > Well, that's a start :-)
> > 
> -- 
> Salut,
> 
> Jordi.
> 
> 
> 




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777

Re: JMeter 2.0 [Re: Source dist build]

Posted by Jordi Salvat i Alabart <js...@atg.com>.
mstover1@apache.org wrote:
> I've been using JMeter as a user quite a bit the past few weeks, and I've learned some things 
> about it.  One is that it's very tedious to use, and so a lot of my thoughts have to do with 
> creating more powerful tools to manipulate test scripts.  I think I'd like to introduce the idea of 
> alternate ways to view a test plan, ala eclipse, so that different aspects of test plan editing can 
> be brought to the forefront.
> 

It's true that test editing is tedious, but I don't really see different 
"aspects" in such a heavy way as Eclipse -- maybe visualization options?

Control vs. non-control elements: you had commented in the past about 
control elements (controllers & samplers) vs. non-control elements 
(where order essentially doesn't matter). Would be great to have an 
option to show/hide those non-control elements when viewing the tree. 
Also to see them in a separate panel showing all those applying to the 
current control element -- with 'inherited' ones greyed out. Most 
importantly because it would provide new (and not-so-new) users a 
clearer view of which non-control elements apply to which control elements.

Tree editing: Eclipse trees have a nice way of indicating whether to 
insert before, insert after, or add as child which would be very handy 
-- our current way is a pain. I don't know if that's doable in Swing, 
though.

Bulk editing: A find/replace feature the most obvious. Another nice one 
could be to be able to select multiple test elements of the same type 
and see the editor in the right panel show white fields for values that 
are equal in all of them -- you could edit these straight-away -- and 
fields with different values in grey -- possibly non-editable.

Protocol pre-selection: by having options on which protocols we want to 
use in the test we could avoid cluttering the menus with samplers & 
config elements not applicable to those protocols.

Screen real-estate usage: reducing font size, getting rid of useless 
spacing, etc... so that more space is left for panels such as the HTTP 
request parameters.

Another usability issue: it would be really nice to have certain test 
elements provide a "dynamically-generated" default name (used in case 
you leave the Name field blank). E.g. "Timer: 1.5 sec.", "Timer: 
10.0±5.0 sec.", "/home/index.jsp",...
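
A minimal sketch of how such generated defaults might look, assuming a hypothetical helper method (not an existing JMeter API):

```java
import java.util.Locale;

// Hypothetical helper for dynamically generated default element names;
// the method name and format strings are assumptions, not JMeter code.
public class DefaultNames {
    static String timerName(double meanSec, double rangeSec) {
        return rangeSec > 0
                ? String.format(Locale.US, "Timer: %.1f±%.1f sec.", meanSec, rangeSec)
                : String.format(Locale.US, "Timer: %.1f sec.", meanSec);
    }

    public static void main(String[] args) {
        System.out.println(timerName(1.5, 0));    // simple form, no range
        System.out.println(timerName(10.0, 5.0)); // mean ± range form
    }
}
```

The same idea would extend to other element types, e.g. using the request path for an HTTP sampler.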

> Remote testing needs to be revamped because it's pointless to have 10 remote machines all 
> trying to stuff responses down the I/O throat of a single controlling machine - better to have the 
> remote machines keep the responses till the end and not risk the accuracy of throughput 
> measurements.  Perhaps a simpler format can be created for remote testing whereby during 
> the test only success/failure plus response time is sent to the controlling machine, and 
> everything else waits to the end of the test.

I agree, but note that this means significant rewrite of all listeners, 
so that they can handle this two-phase input and still show meaningful 
results.

> I want test results categorized by test run, and not just as a list of sampleResults.  A set of 
> sample results has a metadata set that describes the test run, and JMeter should be able to 
> use such metadata to potentially combine test run results and also display statistics 
> comparing two test runs (ie, graphing # users vs throughput).  

How about leaving listeners for real-time test result visualization & 
test result gathering/saving and having a separate application (or 
module) for more complex data analysis. Maybe there's something in the 
non-market we can use straight away?

> Result files need to be abstract datasources with an interface that visualizers talk to without 
> knowing whether the backing data is an XML file, a CSV file, a database, etc.  Right now, 
> JMeter knows how to write CSV files, but can't read them!

Note this would make sense if we had the separate analysis application I 
was talking about.

> A defined interface will help us 
> modularize this code whereas currently it's mixed up with the code for reading and writing test 
> plan files.
> 
> Visualizers should be able to output useful file types for distribution of results to non-jmeter 
> users.  HTML and PNG files, for instance.  Some way of exporting the data to a format that 
> can be easily posted.

Again, a separate analysis tool could take care of this.

> I wanted to make JMeter single threaded with the new non-blocking IO packages, but I don't 
> think this is feasible.

Definitely not doable for the Java samplers. Extremely difficult for 
JDBC, difficult and probably not worth it for the rest (just my view -- 
seems to match yours, though).

Instead, I would focus on accuracy by raising the priority of threads 
during actual sampling. Would not improve total performance in terms of 
max throughput, but would improve measurement accuracy at mid and high 
loads.

Some performance and accuracy tests would also be great. I'm thinking about 
how to do those. An important bit would be unused hardware available for 
a long term for this purpose only (or almost)... I think I can provide this.

>  It's possible to do if you can get access to the very sockets that do the 
> communicating, but how will you get that for jdbc drivers?  Even for HTTP, we'd have to write 
> our own HTTP Client from which we could gain access to the socket being used and control 
> the IO for it (or take the commons client and modify it so).  Because to put it all in a single 
> threaded model, we'd have to take control of the IO part, and force the samplers to hand their 
> sockets to some central code that would take the socket, take the bytes the sampler wants to 
> send, and it would hand back the return bytes plus timing info.  It'd be nice, but I don't think it's 
> feasible for most protocols.
> 
> JMeter needs to collect more data.  Size of responses should be explicitly collected to help 
> throughput calculations of the form bytes/second.  Timing data should include a latency 
> measurement in addition to the whole response time.

Totally agree. The complete split would be:
1- DNS resolution time
2- Connection set-up time (SYN to SYN ACK)
3- Request transmission time (SYN ACK to ACK of last request packet)
4- Latency (ACK of last request packet to 1st response data packet)
5- Response reception time
I'm not sure JMeter is the tool to separate 1,2,3 (this is more of an 
infrastructure-level thing rather than application-level), but 1+2+3+4 
separate from 5 is a must. Top commercial tools separate them all.
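
As a rough, self-contained illustration of phases 2, 4, and 5 from userland (a sketch only; it talks to a local echo server, and phases 1 and 3 would need packet-level access as noted):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: splitting connect time (phase 2) from first-byte latency (phase 4).
// A local echo server makes the example runnable without external hosts.
public class TimingSplitSketch {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(0);
        Thread echo = new Thread(() -> {
            try (Socket c = server.accept()) {
                int b = c.getInputStream().read();  // read the "request" byte
                c.getOutputStream().write(b);       // echo it back
            } catch (IOException ignored) { }
        });
        echo.start();

        long start = System.nanoTime();
        Socket s = new Socket();
        s.connect(new InetSocketAddress("127.0.0.1", server.getLocalPort()));
        long connected = System.nanoTime();      // end of phase 2
        s.getOutputStream().write('x');          // the "request" (phase 3)
        long sent = System.nanoTime();
        s.getInputStream().read();               // blocks until first response byte
        long firstByte = System.nanoTime();      // end of phase 4 (latency)

        System.out.println("connect ns: " + (connected - start));
        System.out.println("latency ns: " + (firstByte - sent));
        s.close();
        server.close();
        echo.join();
    }
}
```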

More accurate simulation of browser behaviour in terms of # of 
concurrent connections, keep-alives, etc. would also be great. Even in 
terms of available bandwidth: simulating modem/ISDN/ADSL users. Again, 
this may not be JMeter's job -- application-level testing is more 
important, IMO.

The problem is same as above: this requires access to the internals of 
the client code. How to do this for JDBC? Maybe changing socket 
factories? But it's a must, so we need to think about it.

>  Multiple SampleResponses need to be 
> dealt with better - I'm thinking that instead of an API that looks like:
> 
> Sampler{
>    SampleResult sample();
> }
> 
> We need one that's more based on a callback situation:
> Sampler {
>    void sample(SendResultsHereService callback);
> }
 >
> so that Samplers can send multiple results to the collector service.  This would make 
> samplers more flexible for when scripting in python is allowed - to allow the adhoc scripter to 
> push out sample results at any time during their script.
> 
I feel pushing out multiple separate samples belongs more to controller 
land rather than sampler land...

> Given this, post-processors like assertions need a way to know which 
> result to apply themselves to.  We already have this problem wherein redirected samples 
> confuse these components.  We need a way to either mark a particular response as "the main 
> one" or define a response set all of which need to be tested by the applicable post-processors.

Isn't the current "sample-tree" structure correct for this? Wouldn't it 
be enough to have post-processors, listeners,etc. know about such 
"structured" sample results?

> I'd also like to replace the Avalon Configuration stuff with something that can load files more 
> stream-like and piecemeal, instead of creating a DOM and then handing it over to JMeter.  It 
> goes too long without any feedback for the user.  Plus uses a ton of memory.

Maybe javax.beans.XMLEncoder/Decoder can help? (Never used it, just 
adding it to the long list).

> Sun's HTTP Client should be replaced.  As the cornerstone of JMeter, we ought to have one 
> that is highly flexible to our needs, provides the most accurate timing it can, the most 
> performance possible, the least resource intensive as possible, and the most transparency to 
> JMeter's controlling code.  I think the commons HTTP Client is probably a good place to start, 
> being open-source, we can craft it to our needs.

Totally agree that it needs to be replaced and that the HTTP Client is 
our best bet.

> Well, that's a start :-)
> 
-- 
Salut,

Jordi.


---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Re: JMeter 2.0 [Re: Source dist build]

Posted by ms...@apache.org.
I've been using JMeter as a user quite a bit the past few weeks, and I've learned some things 
about it.  One is that it's very tedious to use, and so a lot of my thoughts have to do with 
creating more powerful tools to manipulate test scripts.  I think I'd like to introduce the idea of 
alternate ways to view a test plan, ala eclipse, so that different aspects of test plan editing can 
be brought to the forefront.

Remote testing needs to be revamped because it's pointless to have 10 remote machines all 
trying to stuff responses down the I/O throat of a single controlling machine - better to have the 
remote machines keep the responses till the end and not risk the accuracy of throughput 
measurements.  Perhaps a simpler format can be created for remote testing whereby during 
the test only success/failure plus response time is sent to the controlling machine, and 
everything else waits to the end of the test.
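
One way the slimmed-down during-test record might look (a sketch of the idea; the class name, fields, and encoding are assumptions, not an actual JMeter wire format):

```java
// Hypothetical compact record streamed to the controlling machine during the
// test; bulkier data (response bodies, headers) would be shipped at the end.
public class LiveSampleRecord {
    final boolean success;
    final long elapsedMillis;

    LiveSampleRecord(boolean success, long elapsedMillis) {
        this.success = success;
        this.elapsedMillis = elapsedMillis;
    }

    // Encode as a tiny fixed-format line so transmission cost stays negligible.
    String encode() {
        return (success ? "1" : "0") + "," + elapsedMillis;
    }

    static LiveSampleRecord decode(String line) {
        String[] parts = line.split(",");
        return new LiveSampleRecord("1".equals(parts[0]), Long.parseLong(parts[1]));
    }
}
```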

I want test results categorized by test run, and not just as a list of sampleResults.  A set of 
sample results has a metadata set that describes the test run, and JMeter should be able to 
use such metadata to potentially combine test run results and also display statistics 
comparing two test runs (ie, graphing # users vs throughput).  

Result files need to be abstract datasources with an interface that visualizers talk to without 
knowing whether the backing data is an XML file, a CSV file, a database, etc.  Right now, 
JMeter knows how to write CSV files, but can't read them!  A defined interface will help us 
modularize this code whereas currently it's mixed up with the code for reading and writing test 
plan files.
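
A sketch of what such an abstract result datasource could look like (interface and type names here are assumptions for illustration, not JMeter APIs):

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical datasource abstraction: visualizers iterate over records
// without knowing whether the backing store is XML, CSV, or a database.
interface SampleRecordView {
    long elapsedMillis();
    boolean success();
}

interface ResultDataSource extends Iterable<SampleRecordView> {
    String format(); // e.g. "csv", "xml", "jdbc"
}

// Trivial in-memory implementation standing in for a CSV- or XML-backed one.
class InMemoryResults implements ResultDataSource {
    private final List<SampleRecordView> records = new ArrayList<>();

    void add(long elapsed, boolean ok) {
        records.add(new SampleRecordView() {
            public long elapsedMillis() { return elapsed; }
            public boolean success() { return ok; }
        });
    }

    public String format() { return "memory"; }
    public Iterator<SampleRecordView> iterator() { return records.iterator(); }
}
```

A CSV reader implementing the same interface would close the gap mentioned above, since visualizers would no longer care where the data came from.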

Visualizers should be able to output useful file types for distribution of results to non-jmeter 
users.  HTML and PNG files, for instance.  Some way of exporting the data to a format that 
can be easily posted.

I wanted to make JMeter single threaded with the new non-blocking IO packages, but I don't 
think this is feasible.  It's possible to do if you can get access to the very sockets that do the 
communicating, but how will you get that for jdbc drivers?  Even for HTTP, we'd have to write 
our own HTTP Client from which we could gain access to the socket being used and control 
the IO for it (or take the commons client and modify it so).  Because to put it all in a single 
threaded model, we'd have to take control of the IO part, and force the samplers to hand their 
sockets to some central code that would take the socket, take the bytes the sampler wants to 
send, and it would hand back the return bytes plus timing info.  It'd be nice, but I don't think it's 
feasible for most protocols.

JMeter needs to collect more data.  Size of responses should be explicitly collected to help 
throughput calculations of the form bytes/second.  Timing data should include a latency 
measurement in addition to the whole response time.  Multiple SampleResponses need to be 
dealt with better - I'm thinking that instead of an API that looks like:

Sampler{
   SampleResult sample();
}

We need one that's more based on a callback situation:
Sampler {
   void sample(SendResultsHereService callback);
}

so that Samplers can send multiple results to the collector service.  This would make 
samplers more flexible for when scripting in python is allowed - to allow the ad hoc scripter to 
push out sample results at any time during their script.
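
A runnable sketch of that callback-style contract (SendResultsHereService is the name from the message; the other types are assumed for illustration):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the proposed callback-based sampler API: a sampler can push any
// number of results to a collector instead of returning exactly one.
interface SampleResultStub {
    String label();
}

interface SendResultsHereService {
    void addResult(SampleResultStub result);
}

interface CallbackSampler {
    void sample(SendResultsHereService callback);
}

// Collector standing in for JMeter's result-handling machinery.
class CollectingService implements SendResultsHereService {
    final List<SampleResultStub> collected = new ArrayList<>();
    public void addResult(SampleResultStub result) { collected.add(result); }
}
```

A sampler that follows a redirect could then emit both the intermediate and the final response as separate results through the same callback.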

Given this, post-processors like assertions need a way to know which 
result to apply themselves to.  We already have this problem wherein redirected samples 
confuse these components.  We need a way to either mark a particular response as "the main 
one" or define a response set all of which need to be tested by the applicable post-processors.
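
One hedged sketch of a "structured" result that supports both options, marking a main response while letting post-processors walk the whole set (all names are assumptions):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

// Hypothetical tree-shaped result: the root is the "main" response and
// sub-results hold redirects, embedded resources, etc. A post-processor can
// inspect only the root or recurse over the whole response set.
class StructuredResult {
    final String label;
    final boolean success;
    final List<StructuredResult> subResults = new ArrayList<>();

    StructuredResult(String label, boolean success) {
        this.label = label;
        this.success = success;
    }

    // Apply a check (e.g. an assertion) to this result and all sub-results.
    boolean allMatch(Predicate<StructuredResult> check) {
        if (!check.test(this)) return false;
        for (StructuredResult sub : subResults) {
            if (!sub.allMatch(check)) return false;
        }
        return true;
    }
}
```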

I'd also like to replace the Avalon Configuration stuff with something that can load files more 
stream-like and piecemeal, instead of creating a DOM and then handing it over to JMeter.  It 
goes too long without any feedback for the user.  Plus uses a ton of memory.

Sun's HTTP Client should be replaced.  As the cornerstone of JMeter, we ought to have one 
that is highly flexible to our needs, provides the most accurate timing it can, the most 
performance possible, the least resource intensive as possible, and the most transparency to 
JMeter's controlling code.  I think the commons HTTP Client is probably a good place to start, 
being open-source, we can craft it to our needs.

Well, that's a start :-)

-Mike

On 11 Aug 2003 at 2:35, Jordi Salvat i Alabart wrote:

> 
> 
> mstover1@apache.org wrote:
> > I'm pretty committed to the idea that JMeter 2.0 will be drastically different from JMeter 
1.9.  
> > So, feel free to make big changes.
> 
> Hey, Mike, we'd like to know what you're thinking about?
> 
> -- 
> Salut,
> 
> Jordi.
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> 




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


JMeter 2.0 [Re: Source dist build]

Posted by Jordi Salvat i Alabart <js...@atg.com>.

mstover1@apache.org wrote:
> I'm pretty committed to the idea that JMeter 2.0 will be drastically different from JMeter 1.9.  
> So, feel free to make big changes.

Hey, Mike, we'd like to know what you're thinking about?

-- 
Salut,

Jordi.


---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org



Re: Source dist build

Posted by ms...@apache.org.
I think I will stay out of this discussion.  I just like things that work.  One thing, though - I think 
it's important to keep in mind that JMeter differs from most Jakarta projects in that it is a 
client-side application, not a developer's library, and not a server app.  Its primary target user is 
neither a developer nor a server admin, but just a user.  The binary distribution of JMeter must 
include everything required to run JMeter, IMO.  It's also useful that nightly tarballs can be 
compiled and installed without any trouble for users (via the "build.bat/build.sh" script).  
Though, it's probably a good idea to ensure that build.bat/build.sh is in no way required to do 
the build.

Beyond that, Maven sounds interesting, reworking the build.xml file is probably a good idea.  
I'm pretty committed to the idea that JMeter 2.0 will be drastically different from JMeter 1.9.  
So, feel free to make big changes.

-Mike


On 10 Aug 2003 at 19:40, Sebastian Bazley wrote:

> I agree that src_dist should just include the source files, and not any external jars, and should ideally only include the xml docs,
> not the generated html ones.
> 
> Building JMeter would obviously require the run-time jars, but these would be needed for the binary distribution as well - I don't
> see any point in including them in both source and binary distributions.
> 
> It may be worth splitting the binary distribution into two:
> - the external jars
> - the rest of the existing binary distribution. [Perhaps remove the HTML files from docs, and keep just the printable_docs
> versions?]
> 
> As to including Ant in the distribution, I agree with Jeremy that it should not be included.
> It seems to me that developers are likely to have this installed anyway, and Ant is surely now stable enough that any recent version
> will build JMeter.
> 
> As far as I can tell, the only other external dependency is Anakia (Velocity), which is needed to create the documentation.
> This could perhaps be included in the source distribution, but I think it would be better just to leave it up to developers to download
> it separately. As with Ant, they might already have it. The build.xml file can be updated to allow anakia to be picked up from lib/
> and/or ../jakarta-site2/.
> 
> Which reminds me: I would like to propose some enhancements to build.xml:
> 
> - allow the source/binary distributions to be created without rebuilding everything. This can 
be done by introducing two new targets
> which just do the tar/[g]zip etc. These targets could be marked "internal", i.e. no description.
> E.g. Make "dist" depend on "dist_tar", and move the tar/gzip/zip to dist_tar. Similarly for 
src_dist.
> [The original dist and src_dist targets would still do the same work, but refactored.]
> 
> - similarly, add a target (test_only ?) that can be used to test JMeter without needing to do a 
full build.
> At present, this means one cannot test a binary JMeter distribution.
> [One would need to include build.xml and bin/testfiles in binary distributions.]
> 
> - be able to use build.xml without needing build.sh or build.bat (e.g. use "ant [target]" or 
Eclipse)
> As it stands, build.bat does not agree with build.sh on where to look for libraries.
> Also, some of the classpath information is in build.xml and some is in build.xxx, which is not 
ideal.
> 
> This means updating some classpath definitions, and adding a classpath for Anakia. I've got 
this working on Windows XP.
> I've not yet had a chance to try this on Unix, but when I do, I can post a patch to Buzilla - 
unless there are any
> objections/further suggestions?
> 
> Sebastian
> ----- Original Message ----- 
> From: <ms...@apache.org>
> To: "JMeter Developers List" <jm...@jakarta.apache.org>
> Sent: Sunday, August 10, 2003 2:47 PM
> Subject: Re: Source dist build
> 
> 
> Thanks to both Jeremy and Jordi for finding and fixing these problems.  I also
> fixed the lack of jdom.jar in the release.  The src files are out there, however,
> wouldn't it be preferable for the src dist to be a mirror of the files as they
> appear in CVS?  As it is now, someone can download the src, and they're
> really only half done in terms of getting everything they need to compile
> JMeter.  Plus the fact that the versions of libs they choose to download might
> differ from the versions that 1.9 uses, that seems like a potential problem.
> 
> I'm thinking src_dist should simply tar up all the cvs files as is and be done.
> What do you all think?
> 
> -Mike
> 
> On 10 Aug 2003 at 2:49, Jordi Salvat i Alabart wrote:
> 
> > Thanks Jeremy. It worked after initializing the variables.
> >
> > Unfortunately, solving this implies changes to the binary dist (although
> > only in unit test code).
> >
> > Mike: I'll check in the change -- for you to decide which release to
> > include it in.
> >
> > -- 
> > Salut,
> >
> > Jordi.
> >
> > Jeremy Arnold wrote:
> > > Jordi,
> > >    I took a quick look at the test failure log.  I suspect that you are
> > > correct that the problem is that the tests are executed in a different
> > > order.  PackageTest apparently assumes that the variables are already
> > > set.  I haven't tried to check what order the tests are executed in with
> > > the binary distribution, but I see that there are at least a couple of
> > > tests (org.apache.jmeter.engine.util.ValueReplacer.Test,
> > > org.apache.jmeter.extractor.RegexExtracter.Test) which initialize the
> > > variables, and a couple other places in the JMeter code that initialize
> > > them.  The best way to fix this is probably for PackageTest to
> > > initialize the variables itself.  That way the tests can run in any
> > > order without problems.
> > >
> > >    Regarding the missing lib/ant*.jar and other missing libraries:
> > > isn't part of the point of having a source distribution that it doesn't
> > > have all the extra binaries, so you have to either already have them or
> > > download them separately?  I just checked the source distributions for
> > > Tomcat and Commons-HttpClient and neither includes Ant -- they just have
> > > a BUILDING.txt that describes what you need to build it.
> > >
> > >    I agree that we should consider Maven for 1.10 -- a couple months ago
> > > I played with building JMeter with Maven, and it seemed to work pretty
> > > well, especially since we're building multiple jar files.
> > >
> > > Jeremy
> > >
> > >
> > > Jordi Salvat i Alabart wrote:
> > >
> > >> Hi Mike. Hi everyone.
> > >>
> > >> "./build.sh src_dist" has some problems (in addition to the one I
> > >> described in my previous message). Enumerating them here for discussion:
> > >>
> > >> - The src distribution needs to be unpacked on top of the binary
> > >> distribution -- otherwise the libraries in lib/ will be missing. Do you
> > >> think they should be in both packages?
> > >>
> > >> - log4j.conf is missing from bin/. This doesn't seem to break any tests
> > >> -- are they needed?
> > >>
> > >> - bin/testfiles/ is missing -- this is required for tests to work.
> > >>
> > >> - lib/ant-...jar and lib/ant-...-optional.jar are missing. They
> > >> should be in the source package.
> > >>
> > >> - build.sh lacks the necessary execute permissions. I should add these.
> > >>
> > >> - There's no way to build binary and source distributions in a single
> > >> shot -- you need to build one, extract the results, then build the
> > >> other. I should fix this.
> > >>
> > >> I can fix all these without needing to re-issue the binary distribution.
> > >> Or we can generate a source distribution by hand (by packing up the
> > >> checked-out CVS content) and sort these out later: for 1.9.1 if Mike
> > >> decides to issue it for the Japanese stuff, or for 1.10. It's also
> > >> possible that we need to issue a 1.9.1 anyway to sort out the unit
> > >> test problem from my previous e-mail. Your opinion welcome.
> > >>
> > >> For 1.10, we could try to move to Maven for building. Seems to be
> > >> pretty much the standard chez Jakarta. Anyone has experience with it?
> > >>
> > >
> > >
> > >
> > > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> > > For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> > >
> > >
> >
> > -- 
> > Salut,
> >
> > Jordi.
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> > For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> >
> 
> 
> 
> 
> --
> Michael Stover
> mstover1@apache.org
> Yahoo IM: mstover_ya
> ICQ: 152975688
> AIM: mstover777
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> 




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Re: Source dist build

Posted by Sebastian Bazley <Se...@london.sema.slb.com>.
I agree that src_dist should just include the source files, and not any external jars, and should ideally only include the xml docs,
not the generated html ones.

Building JMeter would obviously require the run-time jars, but these would be needed for the binary distribution as well - I don't
see any point in including them in both source and binary distributions.

It may be worth splitting the binary distribution into two:
- the external jars
- the rest of the existing binary distribution. [Perhaps remove the HTML files from docs, and keep just the printable_docs
versions?]

As to including Ant in the distribution, I agree with Jeremy that it should not be included.
It seems to me that developers are likely to have this installed anyway, and Ant is surely now stable enough that any recent version
will build JMeter.

As far as I can tell, the only other external dependency is Anakia (Velocity), which is needed to create the documentation.
This could perhaps be included in the source distribution, but I think it would be better just to leave it up to developers to download
it separately. As with Ant, they might already have it. The build.xml file can be updated to allow anakia to be picked up from lib/
and/or ../jakarta-site2/.

Which reminds me: I would like to propose some enhancements to build.xml:

- allow the source/binary distributions to be created without rebuilding everything. This can be done by introducing two new targets
which just do the tar/[g]zip etc. These targets could be marked "internal", i.e. no description.
E.g. Make "dist" depend on "dist_tar", and move the tar/gzip/zip to dist_tar. Similarly for src_dist.
[The original dist and src_dist targets would still do the same work, but refactored.]
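
A minimal sketch of that refactoring (the target bodies, dependencies, and property names here are illustrative, not the ones in JMeter's actual build.xml):

```xml
<!-- "dist" keeps its current work but delegates the packaging step
     to a new internal "dist_tar" target. -->
<target name="dist" depends="compile, dist_tar"
        description="Build JMeter and create the binary distribution"/>

<!-- No description attribute, so "ant -projecthelp" hides it as internal;
     running "ant dist_tar" re-packages without rebuilding anything. -->
<target name="dist_tar">
  <tar destfile="${dist.dir}/${dist.name}.tar" basedir="${dest.dir}"/>
  <gzip src="${dist.dir}/${dist.name}.tar"
        destfile="${dist.dir}/${dist.name}.tar.gz"/>
  <zip destfile="${dist.dir}/${dist.name}.zip" basedir="${dest.dir}"/>
</target>
```

The same split would apply to src_dist and a src_dist_tar counterpart.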

- similarly, add a target (test_only ?) that can be used to test JMeter without needing to do a full build.
At present, this means one cannot test a binary JMeter distribution.
[One would need to include build.xml and bin/testfiles in binary distributions.]
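
A hypothetical shape for such a target (the runner class name is a placeholder, and the directory layout is assumed from the bin/ and lib/ paths discussed in this thread):

```xml
<!-- Run the tests against the installed jars only, with no compile targets
     in "depends", so the same target works from a binary distribution. -->
<target name="test_only"
        description="Test JMeter without doing a full build">
  <java classname="some.test.AllTestsRunner" fork="yes" dir="bin"
        failonerror="yes">
    <classpath>
      <fileset dir="lib" includes="*.jar"/>
      <fileset dir="lib/ext" includes="*.jar"/>
    </classpath>
  </java>
</target>
```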

- be able to use build.xml without needing build.sh or build.bat (e.g. use "ant [target]" or Eclipse)
As it stands, build.bat does not agree with build.sh on where to look for libraries.
Also, some of the classpath information is in build.xml and some is in build.xxx, which is not ideal.

This means updating some classpath definitions, and adding a classpath for Anakia. I've got this working on Windows XP.
I've not yet had a chance to try this on Unix, but when I do, I can post a patch to Bugzilla - unless there are any
objections/further suggestions?
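
Consolidating the classpath into build.xml itself might look roughly like this (a sketch; the directory names are assumptions based on the lib/ and ../jakarta-site2/ locations mentioned above):

```xml
<!-- One classpath definition inside build.xml, so plain "ant <target>"
     (or Eclipse) works without build.sh/build.bat.  Anakia/Velocity is
     picked up from lib/ or from a sibling jakarta-site2 checkout. -->
<path id="build.classpath">
  <fileset dir="lib" includes="*.jar"/>
  <fileset dir="../jakarta-site2/lib" includes="*.jar"
           erroronmissingdir="false"/>
</path>
```

Targets would then reference it via classpathref="build.classpath".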

Sebastian
----- Original Message ----- 
From: <ms...@apache.org>
To: "JMeter Developers List" <jm...@jakarta.apache.org>
Sent: Sunday, August 10, 2003 2:47 PM
Subject: Re: Source dist build


Thanks to both Jeremy and Jordi for finding and fixing these problems.  I also
fixed the lack of jdom.jar in the release.  The src files are out there, however,
wouldn't it be preferable for the src dist to be a mirror of the files as they
appear in CVS?  As it is now, someone can download the src, and they're
really only half done in terms of getting everything they need to compile
JMeter.  Plus the fact that the versions of libs they choose to download might
differ from the versions that 1.9 uses, that seems like a potential problem.

I'm thinking src_dist should simply tar up all the cvs files as is and be done.
What do you all think?

-Mike

On 10 Aug 2003 at 2:49, Jordi Salvat i Alabart wrote:

> Thanks Jeremy. It worked after initializing the variables.
>
> Unfortunately, solving this implies changes to the binary dist (although
> only in unit test code).
>
> Mike: I'll check in the change -- for you to decide which release to
> include it in.
>
> -- 
> Salut,
>
> Jordi.
>
> Jeremy Arnold wrote:
> > Jordi,
> >    I took a quick look at the test failure log.  I suspect that you are
> > correct that the problem is that the tests are executed in a different
> > order.  PackageTest apparently assumes that the variables are already
> > set.  I haven't tried to check what order the tests are executed in with
> > the binary distribution, but I see that there are at least a couple of
> > tests (org.apache.jmeter.engine.util.ValueReplacer.Test,
> > org.apache.jmeter.extractor.RegexExtracter.Test) which initialize the
> > variables, and a couple other places in the JMeter code that initialize
> > them.  The best way to fix this is probably for PackageTest to
> > initialize the variables itself.  That way the tests can run in any
> > order without problems.
> >
> >    Regarding the missing lib/ant*.jar and other missing libraries:
> > isn't part of the point of having a source distribution that it doesn't
> > have all the extra binaries, so you have to either already have them or
> > download them separately?  I just checked the source distributions for
> > Tomcat and Commons-HttpClient and neither includes Ant -- they just have
> > a BUILDING.txt that describes what you need to build it.
> >
> >    I agree that we should consider Maven for 1.10 -- a couple months ago
> > I played with building JMeter with Maven, and it seemed to work pretty
> > well, especially since we're building multiple jar files.
> >
> > Jeremy
> >
> >
> > Jordi Salvat i Alabart wrote:
> >
> >> Hi Mike. Hi everyone.
> >>
> >> "./build.sh src_dist" has some problems (in addition to the one I
> >> described in my previous message). Enumerating them here for discussion:
> >>
> >> - The src distribution needs to be unpacked on top of the binary
> >> distribution -- otherwise the libraries in lib/ will be missing. Do you
> >> think they should be in both packages?
> >>
> >> - log4j.conf is missing from bin/. This doesn't seem to break any tests
> >> -- are they needed?
> >>
> >> - bin/testfiles/ is missing -- this is required for tests to work.
> >>
> >> - lib/ant-...jar and lib/ant-...-optional.jar are missing. They
> >> should be in the source package.
> >>
> >> - build.sh lacks the necessary execute permissions. I should add these.
> >>
> >> - There's no way to build binary and source distributions in a single
> >> shot -- you need to build one, extract the results, then build the
> >> other. I should fix this.
> >>
> >> I can fix all these without needing to re-issue the binary distribution.
> >> Or we can generate a source distribution by hand (by packing up the
> >> checked-out CVS content) and sort these out later: for 1.9.1 if Mike
> >> decides to issue it for the Japanese stuff, or for 1.10. It's also
> >> possible that we need to issue a 1.9.1 anyway to sort out the unit
> >> test problem from my previous e-mail. Your opinion welcome.
> >>
> >> For 1.10, we could try to move to Maven for building. Seems to be
> >> pretty much the standard chez Jakarta. Anyone has experience with it?
> >>
> >
> >
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> > For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> >
> >
>
> -- 
> Salut,
>
> Jordi.
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
>




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Re: Source dist build

Posted by Jordi Salvat i Alabart <js...@atg.com>.

mstover1@apache.org wrote:
> I'm thinking src_dist should simply tar up all the cvs files as is and be done.  
> What do you all think?

Pros: you can build immediately after downloading, and the ant rules are 
easy to create and maintain.
Cons: it's bulky.

I think the pros outweigh the cons.

-- 
Salut,

Jordi.


---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Re: Source dist build

Posted by Jeremy Arnold <je...@bigfoot.com>.
Mike,
    Jordi did all the work -- I just spent 5 minutes having Eclipse look 
for where methods were called from.

    Getting a dump of CVS sounds like a reasonable plan for this 
release.  But this might be something to discuss for the next release.  
Moving to Maven might make it a moot point anyway, since it will just 
find the libraries it needs (whether on the local system or on the 
network).  If we don't move to Maven, we'll have to talk about whether 
it is better to include all of the libraries or to just write a 
BUILDING.TXT file which lists the versions of the libraries we've tested 
with and their download locations.

Jeremy

mstover1@apache.org wrote:

>Thanks to both Jeremy and Jordi for finding and fixing these problems.  I also 
>fixed the lack of jdom.jar in the release.  The src files are out there, however, 
>wouldn't it be preferable for the src dist to be a mirror of the files as they 
>appear in CVS?  As it is now, someone can download the src, and they're 
>really only half done in terms of getting everything they need to compile 
>JMeter.  Plus the fact that the versions of libs they choose to download might 
>differ from the versions that 1.9 uses, that seems like a potential problem.
>
>I'm thinking src_dist should simply tar up all the cvs files as is and be done.  
>What do you all think?
>
>-Mike
>  
>



> >
> >
>
> -- 
> Salut,
>
> Jordi.
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
>




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org



Re: Source dist build

Posted by ms...@apache.org.
Thanks to both Jeremy and Jordi for finding and fixing these problems.  I also 
fixed the missing jdom.jar in the release.  The src files are out there now; 
however, wouldn't it be preferable for the src dist to be a mirror of the files 
as they appear in CVS?  As it is now, someone can download the src and still be 
only half done in terms of getting everything they need to compile JMeter.  And 
the versions of the libraries they choose to download might differ from the 
versions that 1.9 uses, which seems like a potential problem.

I'm thinking src_dist should simply tar up all the cvs files as is and be done.  
What do you all think?
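For concreteness, here is a minimal sketch of that idea -- tar the checkout as-is, minus the CVS bookkeeping directories. The directory name and archive name are assumptions for illustration, not the project's actual build targets, and the stand-in checkout is created inline so the sketch is self-contained:

```shell
#!/bin/sh
# Illustrative sketch only: package the source dist as a straight mirror of
# the CVS checkout, excluding the CVS bookkeeping directories.
# "jakarta-jmeter-1.9" and the archive name are assumed, not real targets.
set -e
SRCDIR=jakarta-jmeter-1.9
mkdir -p "$SRCDIR/src/core/CVS" dist          # stand-in checkout for the demo
echo 'public class Demo {}' > "$SRCDIR/src/core/Demo.java"
tar --exclude=CVS -cf - "$SRCDIR" | gzip > "dist/${SRCDIR}_src.tgz"
tar -tzf "dist/${SRCDIR}_src.tgz"             # Demo.java is in; CVS/ is not
```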

-Mike

On 10 Aug 2003 at 2:49, Jordi Salvat i Alabart wrote:

> Thanks Jeremy. It worked after initializing the variables.
> 
> Unfortunately, solving this implies changes to the binary dist (although 
> only in unit test code).
> 
> Mike: I'll check in the change -- for you to decide which release to 
> include it in.
> 
> -- 
> Salut,
> 
> Jordi.
> 
> Jeremy Arnold wrote:
> > Jordi,
> >    I took a quick look at the test failure log.  I suspect that you are 
> > correct that the problem is that the tests are executed in a different 
> > order.  PackageTest apparently assumes that the variables are already 
> > set.  I haven't tried to check what order the tests are executed in with 
> > the binary distribution, but I see that there are at least a couple of 
> > tests (org.apache.jmeter.engine.util.ValueReplacer.Test, 
> > org.apache.jmeter.extractor.RegexExtracter.Test) which initialize the 
> > variables, and a couple other places in the JMeter code that initialize 
> > them.  The best way to fix this is probably for PackageTest to 
> > initialize the variables itself.  That way the tests can run in any 
> > order without problems.
> > 
> >    Regarding the missing lib/ant*.jar and other missing libraries:  
> > isn't part of the point of having a source distribution that it doesn't 
> > have all the extra binaries, so you have to either already have them or 
> > download them separately?  I just checked the source distributions for 
> > Tomcat and Commons-HttpClient and neither includes Ant -- they just have 
> > a BUILDING.txt that describes what you need to build it.
> > 
> >    I agree that we should consider Maven for 1.10 -- a couple months ago 
> > I played with building JMeter with Maven, and it seemed to work pretty 
> > well, especially since we're building multiple jar files.
> > 
> > Jeremy
> > 
> > 
> > Jordi Salvat i Alabart wrote:
> > 
> >> Hi Mike. Hi everyone.
> >>
> >> "./build.sh src_dist" has some problems (in addition to the one I 
> >> described in my previous message). Enumerating them here for discussion:
> >>
> >> - The src distribution needs to be unpacked on top of the binary
> >> distribution -- otherwise the libraries in lib/ will be missing. Do you
> >> think they should be in both packages?
> >>
> >> - log4j.conf is missing from bin/. This doesn't seem to break any tests
> >> -- are they needed?
> >>
> >> - bin/testfiles/ is missing -- this is required for tests to work.
> >>
> >> - lib/ant-...jar and lib/ant-...-optional.jar are missing. They
> >> should be in the source package.
> >>
> >> - build.sh lacks the necessary execute permissions. I should add these.
> >>
> >> - There's no way to build binary and source distributions in a single
> >> shot -- you need to build one, extract the results, then build the
> >> other. I should fix this.
> >>
> >> I can fix all these without need to re-issue the binary distribution. 
> >> Or we can generate a source distribution by hand (by packing up the 
> >> checked-out CVS content) and sort these out later: for 1.9.1 if Mike 
> >> decides to issue it for the Japanese stuff, or for 1.10. It's also 
> >> possible that we need to issue a 1.9.1 anyway to sort out the unit 
> >> test problem from my previous e-mail. Your opinion welcome.
> >>
> >> For 1.10, we could try to move to Maven for building. Seems to be 
> >> pretty much the standard chez Jakarta. Anyone has experience with it?
> >>
> > 
> > 
> > 
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> > For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> > 
> > 
> 
> -- 
> Salut,
> 
> Jordi.
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> 




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org



Re: Source dist build

Posted by Jordi Salvat i Alabart <js...@atg.com>.
Thanks Jeremy. It worked after initializing the variables.

Unfortunately, solving this implies changes to the binary dist (although 
only in unit test code).

Mike: I'll check in the change -- for you to decide which release to 
include it in.

-- 
Salut,

Jordi.

Jeremy Arnold wrote:
> Jordi,
>    I took a quick look at the test failure log.  I suspect that you are 
> correct that the problem is that the tests are executed in a different 
> order.  PackageTest apparently assumes that the variables are already 
> set.  I haven't tried to check what order the tests are executed in with 
> the binary distribution, but I see that there are at least a couple of 
> tests (org.apache.jmeter.engine.util.ValueReplacer.Test, 
> org.apache.jmeter.extractor.RegexExtracter.Test) which initialize the 
> variables, and a couple other places in the JMeter code that initialize 
> them.  The best way to fix this is probably for PackageTest to 
> initialize the variables itself.  That way the tests can run in any 
> order without problems.
> 
>    Regarding the missing lib/ant*.jar and other missing libraries:  
> isn't part of the point of having a source distribution that it doesn't 
> have all the extra binaries, so you have to either already have them or 
> download them separately?  I just checked the source distributions for 
> Tomcat and Commons-HttpClient and neither includes Ant -- they just have 
> a BUILDING.txt that describes what you need to build it.
> 
>    I agree that we should consider Maven for 1.10 -- a couple months ago 
> I played with building JMeter with Maven, and it seemed to work pretty 
> well, especially since we're building multiple jar files.
> 
> Jeremy
> 
> 
> Jordi Salvat i Alabart wrote:
> 
>> Hi Mike. Hi everyone.
>>
>> "./build.sh src_dist" has some problems (in addition to the one I 
>> described in my previous message). Enumerating them here for discussion:
>>
>> - The src distribution needs to be unpacked on top of the binary
>> distribution -- otherwise the libraries in lib/ will be missing. Do you
>> think they should be in both packages?
>>
>> - log4j.conf is missing from bin/. This doesn't seem to break any tests
>> -- are they needed?
>>
>> - bin/testfiles/ is missing -- this is required for tests to work.
>>
>> - lib/ant-...jar and lib/ant-...-optional.jar are missing. They
>> should be in the source package.
>>
>> - build.sh lacks the necessary execute permissions. I should add these.
>>
>> - There's no way to build binary and source distributions in a single
>> shot -- you need to build one, extract the results, then build the
>> other. I should fix this.
>>
>> I can fix all these without need to re-issue the binary distribution. 
>> Or we can generate a source distribution by hand (by packing up the 
>> checked-out CVS content) and sort these out later: for 1.9.1 if Mike 
>> decides to issue it for the Japanese stuff, or for 1.10. It's also 
>> possible that we need to issue a 1.9.1 anyway to sort out the unit 
>> test problem from my previous e-mail. Your opinion welcome.
>>
>> For 1.10, we could try to move to Maven for building. Seems to be 
>> pretty much the standard chez Jakarta. Anyone has experience with it?
>>
> 
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> 
> 

-- 
Salut,

Jordi.




Re: Source dist build

Posted by Jeremy Arnold <je...@bigfoot.com>.
Jordi,
    I took a quick look at the test failure log.  I suspect that you are 
correct that the problem is that the tests are executed in a different 
order.  PackageTest apparently assumes that the variables are already 
set.  I haven't tried to check what order the tests are executed in with 
the binary distribution, but I see that there are at least a couple of 
tests (org.apache.jmeter.engine.util.ValueReplacer.Test, 
org.apache.jmeter.extractor.RegexExtracter.Test) which initialize the 
variables, and a couple other places in the JMeter code that initialize 
them.  The best way to fix this is probably for PackageTest to 
initialize the variables itself.  That way the tests can run in any 
order without problems.

    Regarding the missing lib/ant*.jar and other missing libraries:  
isn't part of the point of having a source distribution that it doesn't 
have all the extra binaries, so you have to either already have them or 
download them separately?  I just checked the source distributions for 
Tomcat and Commons-HttpClient and neither includes Ant -- they just have 
a BUILDING.txt that describes what you need to build it.

    I agree that we should consider Maven for 1.10 -- a couple months 
ago I played with building JMeter with Maven, and it seemed to work 
pretty well, especially since we're building multiple jar files.

Jeremy


Jordi Salvat i Alabart wrote:

> Hi Mike. Hi everyone.
>
> "./build.sh src_dist" has some problems (in addition to the one I 
> described in my previous message). Enumerating them here for discussion:
>
> - The src distribution needs to be unpacked on top of the binary
> distribution -- otherwise the libraries in lib/ will be missing. Do you
> think they should be in both packages?
>
> - log4j.conf is missing from bin/. This doesn't seem to break any tests
> -- are they needed?
>
> - bin/testfiles/ is missing -- this is required for tests to work.
>
> - lib/ant-...jar and lib/ant-...-optional.jar are missing. They
> should be in the source package.
>
> - build.sh lacks the necessary execute permissions. I should add these.
>
> - There's no way to build binary and source distributions in a single
> shot -- you need to build one, extract the results, then build the
> other. I should fix this.
>
> I can fix all these without need to re-issue the binary distribution. 
> Or we can generate a source distribution by hand (by packing up the 
> checked-out CVS content) and sort these out later: for 1.9.1 if Mike 
> decides to issue it for the Japanese stuff, or for 1.10. It's also 
> possible that we need to issue a 1.9.1 anyway to sort out the unit 
> test problem from my previous e-mail. Your opinion welcome.
>
> For 1.10, we could try to move to Maven for building. Seems to be 
> pretty much the standard chez Jakarta. Anyone has experience with it?
>





Source dist build

Posted by Jordi Salvat i Alabart <js...@atg.com>.
Hi Mike. Hi everyone.

"./build.sh src_dist" has some problems (in addition to the one I 
described in my previous message). Enumerating them here for discussion:

- The src distribution needs to be unpacked on top of the binary
distribution -- otherwise the libraries in lib/ will be missing. Do you
think they should be in both packages?

- log4j.conf is missing from bin/. This doesn't seem to break any tests
-- are they needed?

- bin/testfiles/ is missing -- this is required for tests to work.

- lib/ant-...jar and lib/ant-...-optional.jar are missing. They
should be in the source package.

- build.sh lacks the necessary execute permissions. I should add these.

- There's no way to build binary and source distributions in a single
shot -- you need to build one, extract the results, then build the
other. I should fix this.
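Two of the items above (the missing execute bit, and running both builds back to back) can be sketched as follows. The build.sh below is a stand-in stub; the real script and its target names are only known from this thread, so treat them as assumptions:

```shell
#!/bin/sh
# Sketch of the fixes described above. build.sh here is a stand-in stub;
# the "dist" and "src_dist" target names are assumed from this thread.
set -e
printf '#!/bin/sh\necho "built: $1"\n' > build.sh
chmod +x build.sh        # restore the execute permission the archive dropped
./build.sh dist          # binary distribution...
./build.sh src_dist      # ...then the source distribution, in one shot
```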

I can fix all these without needing to re-issue the binary distribution. Or 
we can generate a source distribution by hand (by packing up the 
checked-out CVS content) and sort these out later: for 1.9.1 if Mike 
decides to issue it for the Japanese stuff, or for 1.10. It's also 
possible that we need to issue a 1.9.1 anyway to sort out the unit test 
problem from my previous e-mail. Your opinion welcome.

For 1.10, we could try to move to Maven for building. It seems to be pretty 
much the standard at Jakarta. Does anyone have experience with it?

-- 
Salut,

Jordi.


Jordi Salvat i Alabart wrote:
> 
> 
> mstover1@apache.org wrote:
> 
>> I will make a source release in the next few days - the build file 
>> doesn't appear set up to make a source tar, 
> 
> 
> ant src_dist
> 
> should do it -- though it's long since I've not tested it. I'll try it now.
> 
>> so I have to write that.  In previous releases, source was included in 
>> all dists, but that made for a large download, so it was taken out.
>>
>> Also, the japanese translation is now in my hands, and I want to make 
>> that available, probably as a patch.
> 
> 
> 
>> -Mike
>>
>> On 8 Aug 2003 at 17:45, Tetsuya Kitahata wrote:
>>
>>
>>> http://jakarta.apache.org/site/news.html#20030807.1
>>>
>>> Congratulations!
>>>
>>> By the way, where can I find the source version of JMeter 1.9?
>>>
>>> -- Tetsuya (tetsuya@apache.org)
>>>
>>> On Thu, 07 Aug 2003 09:40:08 -0400
>>> (Subject: JMeter 1.9 released)
>>> mstover1@apache.org wrote:
>>>
>>>
>>>> The voting, while far from complete, was unanimous, and JMeter 1.9 
>>>> is released.  The links from jmeter's home pages 
>>>> (jakarta.apache.org/jmeter) have been updated to reflect this.  Enjoy!
>>>>
>>>> Now, let the development fun begin.
>>>>
>>>> I'll make a source release in the next few days as well and put it up.
>>>>
>>>> -- 
>>>> Michael Stover
>>>> mstover1@apache.org
>>>> Yahoo IM: mstover_ya
>>>> ICQ: 152975688
>>>> AIM: mstover777
>>>
>>>
>>>
>>>
>>> ---------------------------------------------------------------------
>>> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
>>> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
>>>
>>
>>
>>
>>
>>
>> -- 
>> Michael Stover
>> mstover1@apache.org
>> Yahoo IM: mstover_ya
>> ICQ: 152975688
>> AIM: mstover777
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
>> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
>>
>>
> 

-- 
Salut,

Jordi.




---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org



Re: JMeter 1.9 released

Posted by Jordi Salvat i Alabart <js...@atg.com>.

mstover1@apache.org wrote:
> I will make a source release in the next few days - the build file doesn't appear 
> set up to make a source tar, 

ant src_dist

should do it -- though it's been a long time since I last tested it. I'll try it now.
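For reference, a source-distribution target of that era would be a sketch along these lines (the target, property, and path names here are assumptions for illustration, not taken from JMeter's actual build.xml):

```xml
<!-- Hypothetical sketch of a src_dist target; names are assumptions,
     not JMeter's real build file. Packs the checked-out sources into
     a gzipped tar, excluding generated output. -->
<target name="src_dist">
  <mkdir dir="${dist.dir}"/>
  <tar destfile="${dist.dir}/jmeter-${version}-src.tar">
    <tarfileset dir="." prefix="jmeter-${version}">
      <exclude name="build/**"/>
      <exclude name="dist/**"/>
    </tarfileset>
  </tar>
  <gzip src="${dist.dir}/jmeter-${version}-src.tar"
        destfile="${dist.dir}/jmeter-${version}-src.tar.gz"/>
</target>
```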

> so I have to write that.  In previous releases, 
> source was included in all dists, but that made for a large download, so it was 
> taken out.
> 
> Also, the japanese translation is now in my hands, and I want to make that 
> available, probably as a patch.


> -Mike
> 
> On 8 Aug 2003 at 17:45, Tetsuya Kitahata wrote:
> 
> 
>>http://jakarta.apache.org/site/news.html#20030807.1
>>
>>Congratulations!
>>
>>By the way, where can I find the source version of JMeter 1.9?
>>
>>-- Tetsuya (tetsuya@apache.org)
>>
>>On Thu, 07 Aug 2003 09:40:08 -0400
>>(Subject: JMeter 1.9 released)
>>mstover1@apache.org wrote:
>>
>>
>>>The voting, while far from complete, was unanimous, and JMeter 1.9 is 
>>>released.  The links from jmeter's home pages (jakarta.apache.org/jmeter) 
>>>have been updated to reflect this.  Enjoy!
>>>
>>>Now, let the development fun begin.
>>>
>>>I'll make a source release in the next few days as well and put it up.
>>>
>>>--
>>>Michael Stover
>>>mstover1@apache.org
>>>Yahoo IM: mstover_ya
>>>ICQ: 152975688
>>>AIM: mstover777
>>
>>
>>
>>---------------------------------------------------------------------
>>To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
>>For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
>>
> 
> 
> 
> 
> 
> --
> Michael Stover
> mstover1@apache.org
> Yahoo IM: mstover_ya
> ICQ: 152975688
> AIM: mstover777
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> 
> 

-- 
Salut,

Jordi.


---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Re: Help with access control (.htaccess)

Posted by "J.D. Bronson" <li...@xpec.com>.
At 09:54 AM 12/20/2001, you wrote:
>"J.D. Bronson" wrote:
> > Ok...hmm...I changed the httpd.conf as follows:
> >
> > DocumentRoot "/var/www/users"
> > <Directory /var/www/users>
> > Order Deny,Allow
> > Deny from All
> > Allow from 192.168.100.0/255.255.255.0
> >      Options FollowSymLinks
> >      AllowOverride AuthConfig
> > </Directory>
> > <Directory /var/www/users/test>
> > Order Deny,Allow
> > Deny from all
> > Allow from 192.168.100.0/255.255.255.0
> >     Options FollowSymLinks
> >      AllowOverride AuthConfig
> > </Directory>
> >
> > ..since I agree with your advice... but this didn't help! :(
> >
> > I still (as a user) get HTTP 403 Forbidden and the apache error log reports
> > the exact same error.
> >
> > ...again I must be close and this has to be something stupid I 
> > did/didn't do.
> >
> > Perhaps if I can setup httpd.conf with all the directives and information
> > and get rid of the .htaccess file?
> > (at least that might make troubleshooting easier?)
>
>There is no functional difference between putting the Auth directives in
>httpd.conf or in .htaccess - it should still work/not work as before.
>The differences are to do with maintenance:
>
>- if you use .htaccess, you don't need to restart the server to pick up
>changes.
>- if you use httpd.conf, you don't need to read .htaccess every time you
>visit a directory.
>
>you pays your money, you makes your choice...
>
>Getting back to your problem:
>
>- I assume you are trying to hit the site from a machine on the
>192.168.100.0 network.
>
>- do you really need to define the netmask? How about slacking off the
>restriction a bit. Just define the private subnet to begin with, e.g.
>
>Allow from 192.168
>
>- Exactly what file is giving the 403? Check in the error log to see
>what file is being "denied acces by server configuration". Is it hitting
>the path you expected?
>
>Things are working better than you think - most people can't get "Deny"
>to work at all! At least for you, the mechanism is working, you just
>have to slacken it off enough to allow your desired hits through.
>
>Rgds,
>
>Owen Boyle.

Well... I added the .htaccess file contents into the /dir area and it still 
didn't work. So...

I looked at the httpd.conf file over and over and found yet ANOTHER entry 
for this specific directory with different directives. Obviously Apache 
read this AFTER my correct directives. Grrrr...

So?

Now, if you are not in the IP range, the connection is forbidden; if you are in 
the IP range, you are prompted for a password.

While I have some more intense testing to do prior to going live, I am 
pleased that this seems to be working.

MANY THANX for the tips.






J.D. Bronson
Aurora Health Care
Information Services
"Death before downtime"


---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: Help with access control (.htaccess)

Posted by Owen Boyle <ob...@bourse.ch>.
"J.D. Bronson" wrote:
> Ok...hmm...I changed the httpd.conf as follows:
> 
> DocumentRoot "/var/www/users"
> <Directory /var/www/users>
> Order Deny,Allow
> Deny from All
> Allow from 192.168.100.0/255.255.255.0
>      Options FollowSymLinks
>      AllowOverride AuthConfig
> </Directory>
> <Directory /var/www/users/test>
> Order Deny,Allow
> Deny from all
> Allow from 192.168.100.0/255.255.255.0
>     Options FollowSymLinks
>      AllowOverride AuthConfig
> </Directory>
> 
> ..since I agree with your advice... but this didn't help! :(
> 
> I still (as a user) get HTTP 403 Forbidden and the apache error log reports
> the exact same error.
> 
> ...again I must be close and this has to be something stupid I did/didn't do.
> 
> Perhaps if I can setup httpd.conf with all the directives and information
> and get rid of the .htaccess file?
> (at least that might make troubleshooting easier?)

There is no functional difference between putting the Auth directives in
httpd.conf or in .htaccess - it should still work/not work as before.
The differences are to do with maintenance:

- if you use .htaccess, you don't need to restart the server to pick up
changes.
- if you use httpd.conf, you don't need to read .htaccess every time you
visit a directory.

you pays your money, you makes your choice...

Getting back to your problem:

- I assume you are trying to hit the site from a machine on the
192.168.100.0 network. 

- do you really need to define the netmask? How about slacking off the
restriction a bit. Just define the private subnet to begin with, e.g.

Allow from 192.168

- Exactly what file is giving the 403? Check in the error log to see
what file is being "denied access by server configuration". Is it hitting
the path you expected?

Things are working better than you think - most people can't get "Deny"
to work at all! At least for you, the mechanism is working, you just
have to slacken it off enough to allow your desired hits through.
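The end state the thread reaches (IP restriction plus a password prompt) can be sketched as a single httpd.conf block. This is only an illustrative sketch: the directory path, realm name, and password-file location are hypothetical, and the directives shown are the Apache 1.3-era access/auth directives discussed above.

```apache
# Sketch only: combine an IP restriction with Basic authentication.
# Paths and the AuthName realm are hypothetical.
<Directory /var/www/users>
    Order Deny,Allow
    Deny from all
    Allow from 192.168

    AuthType Basic
    AuthName "Users area"
    AuthUserFile /var/www/conf/htpasswd
    Require valid-user

    # "Satisfy all" means a client must pass BOTH the IP check
    # and the password check (the behavior described in the thread).
    Satisfy all
</Directory>
```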

Rgds,

Owen Boyle.



Re: Initial Planning Bundle

Posted by Sebastian Rahtz <se...@computing-services.oxford.ac.uk>.
Fotis Jannidis writes:
 > > Sebastian Rahtz <se...@computing-services.oxford.ac.uk> 
 > > fo:inline is not in the spec any more
 > 
 > I still don't understand. In the wd 27 Mar. 2000 there is a description of fo:inline in 
 > section 6.6.7. 

I lied. Sorry, I can't think what I was doing.

sebastian


Re: FO fixes

Posted by Keiron Liddle <ke...@aftexsw.com>.

> > > I'd like to propose that we tag the main branch as of 16 July as FOP_0_14_0,
> > > and jar it up. Comments?
> >
> > +1 (if we have moved the remaining properties to the latest spec)
>
> +1 as well, lots of juicy stuff in here.

+1

> Ah, make sure we have examples that show all changes and enhancements
> like images, SVG, tables and all that nice stuff.
>
> Yeah, call me "promotion manager" :-)

I'll try to get all the SVG documents and examples in order by then.
I want it to be a lot cleaner than the current document.
When ready I will commit to docs/examples/svg



Re: FO fixes

Posted by Stefano Mazzocchi <st...@apache.org>.
Fotis Jannidis wrote:
> 
> > I'd like to propose that we tag the main branch as of 16 July as FOP_0_14_0,
> > and jar it up. Comments?
> 
> +1 (if we have moved the remaining properties to the latest spec)

+1 as well, lots of juicy stuff in here.

Ah, make sure we have examples that show all changes and enhancements
like images, SVG, tables and all that nice stuff.

Yeah, call me "promotion manager" :-)

-- 
Stefano Mazzocchi      One must still have chaos in oneself to be
                          able to give birth to a dancing star.
<st...@apache.org>                             Friedrich Nietzsche
--------------------------------------------------------------------
 Missed us in Orlando? Make it up with ApacheCON Europe in London!
------------------------- http://ApacheCon.Com ---------------------



Re: Asian fonts in pdfs

Posted by Matthew East <md...@ubuntu.com>.
Jeremias Maerki <dev <at> jeremias-maerki.ch> writes:

> 
> Your mail address screams that you're on Unix. So are you sure you want
> to change "fop.bat" (Windows batch) and not "fop" (unix shell script)?

Yes, that was it: I've now got Korean and Chinese PDFs! You guys rock!!

Thanks for all your help.

Matt


---------------------------------------------------------------------
To unsubscribe, e-mail: fop-users-unsubscribe@xmlgraphics.apache.org
For additional commands, e-mail: fop-users-help@xmlgraphics.apache.org


Re: Asian fonts in pdfs

Posted by Jeremias Maerki <de...@jeremias-maerki.ch>.
Your mail address screams that you're on Unix. So are you sure you want
to change "fop.bat" (Windows batch) and not "fop" (unix shell script)?

On 09.05.2006 20:27:28 Matthew East wrote:
> Hi, and thanks again for your reply!
> 
> Jeremias Maerki <dev <at> jeremias-maerki.ch> writes:
> 
> > I guess you're now at the point where you need to increase the maximum
> > VM size: -Xmx 256M (or something like that) in the "fop" script for the
> > "java" command. TrueType fonts eat up a lot of memory.
> 
> I tried this, but it didn't work. I wasn't sure where I should add the string,
> so I tried fop.bat like this:
> 
> java %LOGCHOICE% %LOGLEVEL% -Xmx400m -cp "%LOCALCLASSPATH%"
> org.apache.fop.cli.Main %FOP_CMD_LINE_ARGS%
> 
> Did I do that wrongly?


Jeremias Maerki




Re: Asian fonts in pdfs

Posted by Matthew East <md...@ubuntu.com>.
Hi, and thanks again for your reply!

Jeremias Maerki <dev <at> jeremias-maerki.ch> writes:

> I guess you're now at the point where you need to increase the maximum
> VM size: -Xmx 256M (or something like that) in the "fop" script for the
> "java" command. TrueType fonts eat up a lot of memory.

I tried this, but it didn't work. I wasn't sure where I should add the string,
so I tried fop.bat like this:

java %LOGCHOICE% %LOGLEVEL% -Xmx400m -cp "%LOCALCLASSPATH%"
org.apache.fop.cli.Main %FOP_CMD_LINE_ARGS%

Did I do that wrongly?

Matt




Re: Asian fonts in pdfs

Posted by Jeremias Maerki <de...@jeremias-maerki.ch>.
I guess you're now at the point where you need to increase the maximum
VM size: -Xmx 256M (or something like that) in the "fop" script for the
"java" command. TrueType fonts eat up a lot of memory.

http://xmlgraphics.apache.org/fop/faq.html#OutOfMemoryException
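Concretely, the change belongs on the `java` invocation inside the unix "fop" wrapper script, not fop.bat. The following is a hypothetical fragment mirroring the fop.bat excerpt quoted in this thread; the variable names are placeholders, not the exact contents of the real script:

```shell
# Hypothetical fragment of the unix "fop" wrapper script:
# add -Xmx to raise the maximum JVM heap before the main class runs.
java -Xmx256m -cp "$LOCALCLASSPATH" org.apache.fop.cli.Main "$@"
```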

On 09.05.2006 01:36:51 Matthew East wrote:
> On Mon, 8 May 2006 23:43:37 +0100
> Matthew East <ma...@mdke.org> wrote:
> 
> > Hi Lev,
> > 
> > On Sun, 7 May 2006 14:07:28 +0300
> > Lev T <gu...@gmail.com> wrote:
> > 
> > > If those documents use Unicode charsets, a possible solution is to
> > > embed ttf font contains needed charsets into PDF. I am using
> > > DFSG-free version of "kochi" font family for asian languages.
> > 
> > [...]
> > 
> > Thanks for this tip: I tried the kochi font family as you suggested,
> > and although everything seemed to go well, all I got were hashes
> > again. I've tried the following:
> > 
> > arphic-uming for zh_CN (fop crashed with an error about java memory)
> > kochi for zh_CN, ko, id
> 
> s/id/urd
> 
> So, I've made some progress, having found that a typo in my xsl was
> responsible for the hashes. Now I think that I just need to
> find the right font for each of the languages I am trying, and
> eliminate this crash, which seems to be happening when I get the right
> font (I'm using uming for zh_CN, which I have been assured works) ;)
> 
> http://pastebin.com/706537
> 
> Matt



Jeremias Maerki




Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Matthias Bläsing <mb...@doppel-helix.eu>.
Hi,

Am Donnerstag, den 03.10.2019, 19:49 +0300 schrieb mlist:
> Are you sure the culprit is ant? As I mentioned previously using
> -Dpermit.jdk9.builds=true works. Doesn't that mean that ant is OK but
> something else needs fixing?
> 
> * Generally I am reluctant to download and install software which
>   is not in the official distro repos, as this makes long-term
>   maintenance a nightmare. So I hope I won't need to do it

Seriously: you expect me to figure out your problem, but then you
dismiss the possible solutions offered and don't even try them.

I'll stop putting time into this. The build is stable (demonstrated by
the CI pipeline), it works for me on Ubuntu, and we have reports that
switching from the distribution's ant to the upstream ant also fixes the
build.

Greetings

Matthias


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@netbeans.apache.org
For additional commands, e-mail: dev-help@netbeans.apache.org

For further information about the NetBeans mailing lists, visit:
https://cwiki.apache.org/confluence/display/NETBEANS/Mailing+lists




Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
On Thu, 03 Oct 2019 14:53:51 +0200 Matthias Bläsing wrote:

> Please check which ant you are using to build. We have at least one
> report, where the distribution packed ant messed up the build:

The software versions I am using (from openSUSE's repos) are:

ant-1.9.10-lp150.2.3.1.noarch
java-11-openjdk-11.0.4.0-lp150.2.25.1.x86_64
java-11-openjdk-devel-11.0.4.0-lp150.2.25.1.x86_64
java-11-openjdk-headless-11.0.4.0-lp150.2.25.1.x86_64
java-1_8_0-openjdk-1.8.0.222-lp150.2.19.1.x86_64
java-1_8_0-openjdk-devel-1.8.0.222-lp150.2.19.1.x86_64
java-1_8_0-openjdk-headless-1.8.0.222-lp150.2.19.1.x86_64

> https://issues.apache.org/jira/browse/NETBEANS-239

That report says that ant 1.9.9 or newer is required, so that is
satisfied, right?

> https://www.mail-archive.com/dev@netbeans.incubator.apache.org/msg06452.html

This discussion is 19 months old and unfortunately I can't quite
understand what is the issue there.

> TL;DR: Download a fresh version of ant from apache, put it on the path
> and build with that.

Are you sure the culprit is ant? As I mentioned previously using
-Dpermit.jdk9.builds=true works. Doesn't that mean that ant is OK but
something else needs fixing?

* Generally I am reluctant to download and install software which
  is not in the official distro repos, as this makes long-term
  maintenance a nightmare. So I hope I won't need to do it.





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Matthias Bläsing <mb...@doppel-helix.eu>.
Hi,

Am Freitag, den 04.10.2019, 19:22 +0300 schrieb mlist:
> Tried also ant-1.10.7 - exactly the same failed result.
> It also shows:
> 
> $ /tmp/download/apache-ant-1.10.7/ant-1.10.7/bin/ant -version
> Apache Ant(TM) version 1.9.10 compiled on August 16 2018
> 
> > So what should I do please?
> 

I experimented today with openSUSE. The NetBeans build works correctly
with OpenJDK 11, but fails on 1.8. From my perspective the distribution
broke something in ant (the build works correctly with Ubuntu's ant and with
upstream ant, but fails on openSUSE).

My advice: either raise an issue with openSUSE (as I found similar
reports from years back, I doubt anyone cares) or just drop the distribution
package and use the upstream ant.

HTH

Matthias






Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
Tried also ant-1.10.7 - exactly the same failed result.
It also shows:

$ /tmp/download/apache-ant-1.10.7/ant-1.10.7/bin/ant -version
Apache Ant(TM) version 1.9.10 compiled on August 16 2018

> So what should I do please?





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by mlist <ml...@riseup.net>.
On Thu, 03 Oct 2019 14:53:51 +0200 Matthias Bläsing wrote:

> TL;DR: Download a fresh version of ant from apache, put it on the path
> and build with that.

OK. I have done that:

1. I downloaded and built successfully ant-1.9.14 from source
2. In ~/.nbbuild.properties I have set as recommended:

nbjdk.home=/usr/lib64/jvm/java-1.8.0-openjdk-1.8.0/

I also used:

export JAVA_HOME=/usr/lib64/jvm/java-1.8.0-openjdk-1.8.0/

and I have set the newly built ant bin in path:

export PATH=/tmp/download/apache-ant-1.9.14/ant-1.9.14/bin:${PATH}

3. I ran 'ant -Dcluster.config=full'


Result:

BUILD FAILED

Here are the last lines of the output:

https://susepaste.org/e00b1e78

Something strange I noticed is that the build of ant-1.9.14 shows that
it is version 1.9.10:

$ /tmp/download/apache-ant-1.9.14/ant-1.9.14/bin/ant -version
Apache Ant(TM) version 1.9.10 compiled on August 16 2018

IOW: the same version which comes with openSUSE:

$ /usr/bin/ant -version
Apache Ant(TM) version 1.9.10 compiled on August 16 2018

FWIW the only difference between the two is 3 lines:

diff /tmp/download/apache-ant-1.9.14/ant-1.9.14/bin/ant /usr/bin/ant
288a289,291
> if test -n "$SOURCE_DATE_EPOCH" ; then
>   ANT_OPTS="$ANT_OPTS -Dant.tstamp.now=$SOURCE_DATE_EPOCH"
> fi
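One plausible explanation for the version mismatch above (an assumption on the editor's part, not confirmed anywhere in the thread) is that ant's `bin/ant` wrapper script resolves its jars via `ANT_HOME`, so if a stale `ANT_HOME` still points at the distro install, any copy of the wrapper will report and run the old version. A quick check:

```shell
# If ANT_HOME points at the distro install, a freshly built bin/ant
# will still load the old ant.jar. Check and clear it, then retry:
echo "$ANT_HOME"
unset ANT_HOME
/tmp/download/apache-ant-1.9.14/ant-1.9.14/bin/ant -version
```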


So what should I do please?





Re: Building NetBeans 11.1 from source fails with compile errors

Posted by Matthias Bläsing <mb...@doppel-helix.eu>.
Am Donnerstag, den 03.10.2019, 15:26 +0300 schrieb mlist:
> On Wed, 2 Oct 2019 23:06:58 +0300 mlist wrote:
> 
> > Hi,
> > 
> > I am trying to build NetBeans 11.1 from source code on openSUSE
> > Leap
> > 15. I have read the instructions and the README.md but
> > unfortunately I
> > am getting errors which I don't know how to fix as I am not a Java
> > developer (my intention is to use NetBeans IDE for PHP, CSS,
> > JavaScript).
> > 
> > Here is the output I am getting:
> > 
> > https://susepaste.org/673abb68
> > 
> > What should I do to make this work please?
> 
> Can anyone please help?
> 

Please check which ant you are using to build. We have at least one
report, where the distribution packed ant messed up the build:

https://issues.apache.org/jira/browse/NETBEANS-239

Before you say it: Yes the bug talks about Fedora, but the mailing list
indicates similar problems on OpenSuSE:

https://www.mail-archive.com/dev@netbeans.incubator.apache.org/msg06452.html

TL;DR: Download a fresh version of ant from apache, put it on the path
and build with that.
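That advice can be sketched as a few shell commands (the install path here is just an example):

```shell
# Sketch: prefer a freshly unpacked upstream ant over the distro package
# by putting its bin directory first on PATH (path is an example).
export ANT_HOME=/opt/apache-ant-1.10.7
export PATH="$ANT_HOME/bin:$PATH"
command -v ant   # should now resolve inside $ANT_HOME/bin
ant -version     # confirm the expected upstream version before building
```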

HTH

Matthias






Re: [PATCH] global_score accessor function

Posted by Aaron Bannert <aa...@clove.org>.
On Fri, Dec 07, 2001 at 10:45:57AM -0800, Harrie Hazewinkel wrote:
> >Attached is a patch that provides an accessor function
> >for the global_score portion of the scoreboard.
> >Useful for modules that want to access this
> >portion of the scoreboard.

Hi Harrie,

This isn't applying cleanly for me (my checked-out copy doesn't have
the AP_DECLARE(...) macros around the return types of those accessors).
Are you sure you're working off of HEAD?

-aaron

Re: [contrib] mod_mmap_static.c v0.03

Posted by Dean Gaudet <dg...@arctic.org>.
Me too. 

Dean

On Tue, 10 Feb 1998, Ben Laurie wrote:

> Dean Gaudet wrote:
> > 
> > But then a bunch of other contrib module authors may want their module
> > included and then we'd get support questions about those modules (even
> > though we say we don't support them)... the difference here is that you're
> > probably thinking some day we will support mod_mmap_static, or
> > functionality like it.  Whereas we're probably not going to support
> > mod_sql_database_of_the_month (except in the future by providing a more
> > specialized db lookup API).
> 
> In which case modules/experimental makes the most sense (to me).
> 
> Cheers,
> 
> Ben.
> 
> -- 
> Ben Laurie            |Phone: +44 (181) 735 0686|Apache Group member
> Freelance Consultant  |Fax:   +44 (181) 735 0689|http://www.apache.org
> and Technical Director|Email: ben@algroup.co.uk |Apache-SSL author
> A.L. Digital Ltd,     |http://www.algroup.co.uk/Apache-SSL
> London, England.      |"Apache: TDG" http://www.ora.com/catalog/apache
> 


Re: [contrib] mod_mmap_static.c v0.03

Posted by Ben Hyde <bh...@pobox.com>.
Contrib will encourage contributions; I think that's the most
important thing.  Attempting to guard against clueless requests
for help from the users is futile.  Just pop a README in there
and take the downside, hope for the upside.  - Ben Hyde



Re: [contrib] mod_mmap_static.c v0.03

Posted by Rodent of Unusual Size <Ke...@Golux.Com>.
Dirk.vanGulik@jrc.it wrote:
> 
> On Tue, 10 Feb 1998, Ben Laurie wrote:
> 
> Or modules/unsupported :-)

Good point about the meaning of contrib, Dean.

How about modules/nonstandard to parallel modules/standard?

No vested interest in any of these, just a thought..

#ken	P-)}

Re: [contrib] mod_mmap_static.c v0.03

Posted by Di...@jrc.it.
On Tue, 10 Feb 1998, Ben Laurie wrote:

Or modules/unsupported :-)

> 
> In which case modules/experimental makes the most sense (to me).
> 
> Cheers,
> 
> Ben.
> 
> -- 
> Ben Laurie            |Phone: +44 (181) 735 0686|Apache Group member
> Freelance Consultant  |Fax:   +44 (181) 735 0689|http://www.apache.org
> and Technical Director|Email: ben@algroup.co.uk |Apache-SSL author
> A.L. Digital Ltd,     |http://www.algroup.co.uk/Apache-SSL
> London, England.      |"Apache: TDG" http://www.ora.com/catalog/apache
> 


Re: [contrib] mod_mmap_static.c v0.03

Posted by Ben Laurie <be...@algroup.co.uk>.
Dean Gaudet wrote:
> 
> But then a bunch of other contrib module authors may want their module
> included and then we'd get support questions about those modules (even
> though we say we don't support them)... the difference here is that you're
> probably thinking some day we will support mod_mmap_static, or
> functionality like it.  Whereas we're probably not going to support
> mod_sql_database_of_the_month (except in the future by providing a more
> specialized db lookup API).

In which case modules/experimental makes the most sense (to me).

Cheers,

Ben.

-- 
Ben Laurie            |Phone: +44 (181) 735 0686|Apache Group member
Freelance Consultant  |Fax:   +44 (181) 735 0689|http://www.apache.org
and Technical Director|Email: ben@algroup.co.uk |Apache-SSL author
A.L. Digital Ltd,     |http://www.algroup.co.uk/Apache-SSL
London, England.      |"Apache: TDG" http://www.ora.com/catalog/apache

Re: [contrib] mod_mmap_static.c v0.03

Posted by Dean Gaudet <dg...@arctic.org>.
But then a bunch of other contrib module authors may want their module
included and then we'd get support questions about those modules (even
though we say we don't support them)... the difference here is that you're
probably thinking some day we will support mod_mmap_static, or
functionality like it.  Whereas we're probably not going to support
mod_sql_database_of_the_month (except in the future by providing a more
specialized db lookup API).

Shrug. 

Dean

On Mon, 9 Feb 1998, Rodent of Unusual Size wrote:

> Brian Behlendorf wrote:
> > 
> > Er, okay, I forgot the point of extra I guess.  modules/contrib?
> 
> +1!
> 
> #ken	P-)}
> 


Re: [contrib] mod_mmap_static.c v0.03

Posted by Rodent of Unusual Size <Ke...@Golux.Com>.
Brian Behlendorf wrote:
> 
> Er, okay, I forgot the point of extra I guess.  modules/contrib?

+1!

#ken	P-)}

Re: file results format

Posted by Mike Stover <ms...@apache.org>.
On 24 Jan 2003 at 0:56, Michal Kostrzewa wrote:

> >
> > Not exactly.  Yes, the ideas below would allow config of what data gets
> > saved in each sample result, but what I'm talking about is the ability to
> > organize which requests a listener "hears from".  It's not quite the same
> > thing.
> 
> Yes, I see the difference. But there is a problem with current implementation. 
> Log file is bound to visualizer - this leads to two strange things:
> 
>  1) Two visualizers in the same controller log exactly the same data, both have edit boxes 
> to enter file names, and it's not obvious what they are for or in which visualizer 
> you have to enter the filename, etc. (as in today's posts)
> 
>  2) what if you save data in one visualizer in particular controller and load 
> it in other visualizer in totally different context? you'll receive 
> visualized data, but not valid in viewed context (e.g. different paths 
> impossible to reach from visualizer you look at it) In fact, you can load it 
> into visualizers in other jtx file into foreign context...
> 
> Point 1) can be solved by binding log information rather with controller and 
> not visualizer.
> Point 2) tells me that visualizers should not be bound into the test tree (!) 
> at all. In other words - maybe we should control saving data from 
> controllers, (which reasonably groups requests) and view it in worbench or 
> such place? Then visualizers will be just tools to visualize data - whatever 
> you save from whatever place. The visualizers can have common 
> filtering/aggregating GUI... I know it creates new problems....
> One more reflection - I think there is some subtle difference between viewing 
> samples 'on-line' (when they are recored) and 'off-line' (analyzing test 
> results).

I agree entirely with both your points, and I think they boil down to the same problem, which is 
that the test tree is confusing sequential items with hierarchical items.  It would be nice to 
"plug in" a listener into both a) a datasource - your point #2, and b) a controller - your point 
#1.  This is related to what I wrote on the wiki pages at 
http://nagoya.apache.org/wiki/apachewiki.cgi?ImprovingJMetersGUI.  

The more I think about it, the more the "test tree" is turning into more of a "web" in my head.  
There's a sequence of requests, but the rest is hierarchical only because 
it's a tree.  It could easily be more of a "plug in" metaphor where the user draws lines from 
controllers to listeners, from listeners to datasources, etc.  This would be a hard GUI to do, but 
it would make much more sense if done well, I think.  I'm just thinking aloud here - I'm in 
agreement with you that these problems are real.

> 
> I know it looks like I'm splitting hairs, but there is something about the design 
> purity of the current solution that doesn't let me sleep at night :-) 

I'm the same way.  When I first started thinking about your datasource ideas long ago, it kind 
of scared me off.  I was going round and round with what was the best way to conceptualize 
it.

> 
> 
> >
> > Ok, now I'm with you.  I agree - but I'd like to enhance our save format to
> > allow multiple test runs to exist in one file and/or allow listeners to
> > load data from multiple sources and combine them in a reasonable way.  I
> > don't think this would be too hard - there just needs to be some
> > information about the test run included (time of start, time of end, for
> > example). And then listeners could use that information to appropriately
> > combine data from multiple test runs.
> 
> So we both agreed about the need of some metadata in test results! You've just 
> said about saving several tests to one file, and tests are distinguished by 
> start/stop date. (my solution is saving multiple tests to one database and 
> has test_id, test_name but the rule is the same)

Yes, start/stop time is necessary in order to calculate throughputs correctly between two tests.  
Test run name is a good thing too.

> 
> 
> >
> > Sounds good so long as we understand there's no real difference between a
> > database and a file system - from the listener's point of view.  In other
> > words, all this should be possible whether you're using a database or just
> > files.  Granted, files may be slower and less efficient, but it should
> > still work either way.
> >
> > Also, this goes back to what I said earlier about saving information about
> > each test run so that listeners can appropriately combine results from
> > multiple test runs.  That's what you are describing here, if I understand
> > rightly.
> 
> Well, combining results from multiple tests is one thing, but more common is 
> simply finding a db way of doing this:
> tweak-test-parameters    launch test    save-log-to-file-1    clear-results   
> tweak-test-parameters-again   launch test    save-log-to-file-2    
> clear-results  ... and again and again.
> (At least I use JMeter that way :) When logging to a db you cannot create a 
> database for every single test. Test_id's (or knowing start-stop time) 
> solve that problem.

Yes, the db datasource automatically creates new space for the next test (I'm assuming).  
There's no reason a file couldn't do the same.

-Mike

> 
> hmmm, I think I think about it again... :-)
> best regards and good night !
> Michal
> 
> 
> --
> To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
> For additional commands, e-mail: <ma...@jakarta.apache.org>
> 



--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777



Re: file results format

Posted by Michal Kostrzewa <M....@pentacomp.com.pl>.
>
> Not exactly.  Yes, the ideas below would allow config of what data gets
> saved in each sample result, but what I'm talking about is the ability to
> organize which requests a listener "hears from".  It's not quite the same
> thing.

Yes, I see the difference. But there is a problem with the current implementation. 
The log file is bound to the visualizer - this leads to two strange things:

 1) Two visualizers in the same controller log exactly the same data, each has an 
edit box to enter a file name, and it's not obvious what it is for or in which 
visualizer you have to enter the filename, etc. (as in today's posts)

 2) What if you save data in one visualizer in a particular controller and load 
it into another visualizer in a totally different context? You'll receive 
visualized data, but it won't be valid in the viewed context (e.g. different 
paths, impossible to reach from the visualizer you view it in). In fact, you 
can load it into visualizers in another jmx file, into a foreign context...

Point 1) can be solved by binding log information to the controller rather than 
the visualizer.
Point 2) tells me that visualizers should not be bound into the test tree (!) 
at all. In other words - maybe we should control saving data from the 
controllers (which reasonably group requests) and view it in the workbench or 
some such place? Then visualizers would be just tools to visualize data - 
whatever you save, from whatever place. The visualizers could have a common 
filtering/aggregating GUI... I know it creates new problems....
One more reflection - I think there is a subtle difference between viewing 
samples 'on-line' (as they are recorded) and 'off-line' (analyzing test 
results).

I know it looks like I'm splitting hairs, but there is something about the 
design purity of the current solution that doesn't let me sleep at night :-) 


>
> Ok, now I'm with you.  I agree - but I'd like to enhance our save format to
> allow multiple test runs to exist in one file and/or allow listeners to
> load data from multiple sources and combine them in a reasonable way.  I
> don't think this would be too hard - there just needs to be some
> information about the test run included (time of start, time of end, for
> example). And then listeners could use that information to appropriately
> combine data from multiple test runs.

So we both agree on the need for some metadata in test results! You've just 
talked about saving several tests to one file, with tests distinguished by 
start/stop date. (My solution saves multiple tests to one database and 
has test_id and test_name, but the rule is the same.)


>
> Sounds good so long as we understand there's no real difference between a
> database and a file system - from the listener's point of view.  In other
> words, all this should be possible whether you're using a database or just
> files.  Granted, files may be slower and less efficient, but it should
> still work either way.
>
> Also, this goes back to what I said earlier about saving information about
> each test run so that listeners can appropriately combine results from
> multiple test runs.  That's what you are describing here, if I understand
> rightly.

Well, combining results from multiple tests is one thing, but more common is 
simply finding a db way of doing this:
tweak-test-parameters    launch test    save-log-to-file-1    clear-results   
tweak-test-parameters-again   launch test    save-log-to-file-2    
clear-results  ... and again and again.
(At least I use JMeter that way :) When logging to a db you cannot create a 
database for every single test. Test_id's (or knowing start-stop time) 
solve that problem.

hmmm, I think I think about it again... :-)
best regards and good night !
Michal




Re: file results format

Posted by Mike Stover <ms...@apache.org>.
On 23 Jan 2003 at 21:28, Michal Kostrzewa wrote:

> >
> > Except the current implementation allows a user to set up multiple
> > listeners that listen to specific requests.  The listeners work
> > hierarchically too - if you add a listener to a specific controller, it
> > will only log samples from requests under that controller.  If you went to
> > a global file, you'd lose that capability - which I think is useful.
> 
> It'd be well solved by configuration of the sampler/controller.

Not exactly.  Yes, the ideas below would allow config of what data gets saved in each 
sample result, but what I'm talking about is the ability to organize which requests a listener 
"hears from".  It's not quite the same thing.

> 
> 
> > I don't really understand why the logged files should be different from
> > listener to listener. Surely XML is slow and bad and we all agree on that,
> > and we'd all like it configurable as to what information is logged, but I
> > strongly object to anything being put in those files that is calculated by
> > a specific listener.  And that is what is being asked for, essentially.
> 
> I've agreed with that from the beginning; sorry if I haven't stated it clearly. 
> The main problem is that a visualizer can't have reporting logic, which may 
> sound strange, but without access to the raw data source a visualizer can't 
> take advantage of db aggregation. On the other hand, we could give such access 
> to the visualizers, but then they wouldn't be independent of the logging logic. 
> I still have conceptual problems with that.

Ok, now I'm with you.  I agree - but I'd like to enhance our save format to allow multiple test 
runs to exist in one file and/or allow listeners to load data from multiple sources and 
combine them in a reasonable way.  I don't think this would be too hard - there just needs 
to be some information about the test run included (time of start, time of end, for example).  
And then listeners could use that information to appropriately combine data from multiple 
test runs.  

Not all listeners, of course - graph listener would benefit little from this, but it's fine to leave 
it up to the individual listener component to implement this or not.  The important thing is to 
make the info available in the data.
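[As a rough illustration of the save-format idea above: tag every sample row with a run identifier, and a listener can load one file holding several runs and combine or separate them as needed. Column names here are invented for illustration, not an actual JMeter format.]

```python
# Sketch: a flat CSV save format where each sample row carries its run id,
# so one file can hold several test runs and a listener can group them.
import csv, io

rows = [
    {"run_id": "r1", "label": "login", "elapsed_ms": "120"},
    {"run_id": "r1", "label": "search", "elapsed_ms": "80"},
    {"run_id": "r2", "label": "login", "elapsed_ms": "95"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["run_id", "label", "elapsed_ms"])
writer.writeheader()
writer.writerows(rows)

# A listener reading the file back can separate or combine runs as needed.
by_run = {}
for row in csv.DictReader(io.StringIO(buf.getvalue())):
    by_run.setdefault(row["run_id"], []).append(int(row["elapsed_ms"]))
print(by_run)  # {'r1': [120, 80], 'r2': [95]}
```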

> 
> >>
> > Sounds great.  I would vote for just eliminating the XML format and going
> > to CSV (for files).
> 
> Some people *love* XML, fluently use tools for evaluating reports, and are 
> used to it, so it should stay. Fast visualizers are not necessary for these 
> people.

Yes, I suppose you're right.

> [snip]
> 
> Well - imagine this situation. You have an application to test, and you want 
> to draw a plot of application response time (on the Y axis) vs. the count 
> of users (on the X axis), to determine the scalability of the application. So 
> you have to make several stress tests with an increasing thread number. You 
> want to log results to the database. Probably you want to create only *one* 
> db, and that db has a table storing test results. However, you want to make 
> SQL queries selecting only the chosen test (for the n-th user) - you have to 
> have some way to assign a sample result in the table to a given test. This is 
> no big deal when saving to files - you just name the files appropriately: 
> 1-user.jmx, 10-users.jmx ... 100-users.jmx. 
> An extension to that idea is to add fields describing the test - who, when, 
> subject of test, description and so on, to provide some organization. Again, 
> when using files it's simple - you could name a file like 
> test_done_2000_10_10_by_MKO_10_users_failed_because_of_application_errors.jmx 
> but it's not standardized, not elegant, and impossible for DBs.

Sounds good so long as we understand there's no real difference between a database and 
a file system - from the listener's point of view.  In other words, all this should be possible 
whether you're using a database or just files.  Granted, files may be slower and less 
efficient, but it should still work either way.  

Also, this goes back to what I said earlier about saving information about each test run so 
that listeners can appropriately combine results from multiple test runs.  That's what you 
are describing here, if I understand rightly.

-Mike

> 
> What about that?
> best regards
> Michal Kostrzewa
> 
> 
> 
> --
> To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
> For additional commands, e-mail: <ma...@jakarta.apache.org>
> 






Re: File produced by the AggregateReport listener

Posted by Michal Kostrzewa <M....@pentacomp.com.pl>.
>
> Except the current implementation allows a user to set up multiple
> listeners that listen to specific requests.  The listeners work
> hierarchically too - if you add a listener to a specific controller, it
> will only log samples from requests under that controller.  If you went to
> a global file, you'd lose that capability - which I think is useful.

It'd be well solved by configuration of the sampler/controller.


> I don't really understand why the logged files should be different from
> listener to listener. Surely XML is slow and bad and we all agree on that,
> and we'd all like it configurable as to what information is logged, but I
> strongly object to anything being put in those files that is calculated by
> a specific listener.  And that is what is being asked for, essentially.

I've agreed with that from the beginning; sorry if I haven't stated it clearly. 
The main problem is that a visualizer can't have reporting logic, which may 
sound strange, but without access to the raw data source a visualizer can't 
take advantage of db aggregation. On the other hand, we could give such access 
to the visualizers, but then they wouldn't be independent of the logging logic. 
I still have conceptual problems with that.

>>
> Sounds great.  I would vote for just eliminating the XML format and going
> to CSV (for files).

Some people *love* XML, fluently use tools for evaluating reports, and are 
used to it, so it should stay. Fast visualizers are not necessary for these 
people.

> > particular requests or groups of requests to exclude not interesting
> > requests and to save the disk space.
>
> Cool.  While we're there - why not a full config option for each request to
> indicate what should be saved and what should be tossed?  It could be a
> simple button that opens up a dialog screen to configure the report for
> that request or controller.  Or, it could be a new kind of test element -
> Log Config.

Yes! I meant exactly that; I'll post it today, I think (I'm checking it 
now)

>
> > Another feature could be a "test results desktop" and "test results metadata"
> > - with that you can describe/view/erase your test results. I've encountered
> > this problem in jdbc logging, where I can't just log a test to a named file -
> > there has to be some key to distinguish tests (in jdbc logging it's
> > implemented as the test_id field). It will also help to keep test results in
> > order.
>
> I didn't understand this.

Well - imagine this situation. You have an application to test, and you want 
to draw a plot of application response time (on the Y axis) vs. the count 
of users (on the X axis), to determine the scalability of the application. So 
you have to make several stress tests with an increasing thread number. You 
want to log results to the database. Probably you want to create only *one* 
db, and that db has a table storing test results. However, you want to make 
SQL queries selecting only the chosen test (for the n-th user) - you have to 
have some way to assign a sample result in the table to a given test. This is 
no big deal when saving to files - you just name the files appropriately: 
1-user.jmx, 10-users.jmx ... 100-users.jmx. 
An extension to that idea is to add fields describing the test - who, when, 
subject of test, description and so on, to provide some organization. Again, 
when using files it's simple - you could name a file like 
test_done_2000_10_10_by_MKO_10_users_failed_because_of_application_errors.jmx 
but it's not standardized, not elegant, and impossible for DBs.
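[The test_id idea described above could be sketched like this with an in-memory SQLite database. Table and column names are invented for illustration; the actual jdbc logger's schema may differ.]

```python
# Sketch: one table holds samples from many test runs; test_id lets a
# query select only the chosen run, e.g. to plot response time vs. threads.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sample_result (
    test_id INTEGER, test_name TEXT, threads INTEGER, elapsed_ms INTEGER)""")
data = [
    (1, "1-user run",   1,  40),
    (1, "1-user run",   1,  42),
    (2, "10-user run", 10, 130),
]
conn.executemany("INSERT INTO sample_result VALUES (?, ?, ?, ?)", data)

# Select only the chosen test by its id.
avg = conn.execute(
    "SELECT AVG(elapsed_ms) FROM sample_result WHERE test_id = ?", (1,)
).fetchone()[0]
print(avg)  # average response time of test 1
```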

What about that?
best regards
Michal Kostrzewa





RE: Externals using absolute path on Windows

Posted by David Hickman <da...@audleytravel.com>.
on Mon, Apr 23, 2012 at 6:59 AM, David Hickman <da...@audleytravel.com>> wrote:
Hi All,

I'm trying to make a plan for migrating some of our source code to SVN and am looking at using externals to deal with some of our code that is shared between libraries.  In doing this I have come across issue number 4073 (Assert on Windows absolute paths in svn:externals), which is preventing me from being able to map our externals to absolute paths (we are all Windows-based machines here).

The link for the issue is below:
http://subversion.tigris.org/issues/show_bug.cgi?id=4073

Is anyone aware of any workarounds for issue number 4073?  I guess using relative paths is likely to work, but that would require changes to a number of our projects that we would rather not tackle at this time.  Are there any special characters or special methods of formatting the path that I can use to get the externals to work?  If not, does anyone have any idea about the timescales for the version in which this issue is likely to be resolved?


If any more details are needed then please just let me know.


You didn't mention *which* Subversion codebase or binary you're using. Check. Then test it with TortoiseSVN: that will let you know if it's fixed in the current 1.7.4 codebases, as hinted by that bug report.


>>>>>

Good point, sorry about that.  The version information from my about screen of TortoiseSVN is:

TortoiseSVN 1.7.6, Build 22632 - 32 Bit , 2012/03/08 18:29:39
Subversion 1.7.4,
apr 1.4.5
apr-utils 1.3.12
neon 0.29.6
OpenSSL 1.0.0g 18 Jan 2012
zlib 1.2.5

SVN is a new project for us and we've only ever used this version so it appears to be an outstanding issue.


Best Regards,
David



Re: svn copy breaks on svn update

Posted by Jan Hendrik <li...@gmail.com>.
Concerning Re: svn copy breaks on svn update
Stefan Sperling wrote on 27 Aug 2009, 19:41, at least in part:

> It sounds like a serious problem. But to fix it, we need to isolate
> the cause, and to isolate it, we need more information, and the only
> person on this planet capable of digging for more information about
> this problem is you. Unless someone else shows up who is also seeing
> this problem.

Definitely.  I hope I have some information which might tell you 
more than it does me, and probably even a clue.

> Can you reproduce the problem on a copy of the repository which is
> causing trouble? Can you identify what revision has caused this?

Yes, that's easy (-r1000 for simplicity).  IIRC it also was the head 
rev. when trouble started while updating wc2.  wc2 might have been 
at rev. 999 or a bit earlier, but not much.  Update obviously broke 
when adding the files copied (and edited) to "common" folder as 
committed from wc1 in rev. 1000.  At this time the source files 
(foox.php) already had been updated to rev. 1000, too.

> Have you run svnadmin verify on a copy of the repository?
> What does it have to say?

successful for rev. 1000 (actually checked revs. 995:HEAD, then at 
1005, if necessary I can run full verification over the weekend).

Also I could checkout rev. 1000 w/o issues.

Next I did

svn co -r999 (no issues);

svn up (fails, wc locked, should do cleanup); <= on fresh checkout?

svn cleanup (successful);

svn up (successful to rev. 1005, so it seems my fear of ending up 
with a repository which would not return revs. 1000+ was 
unnecessary).

Also an update of wc1 worked.  Ditto, I was able to update the 
"common" folder in another wc, then the wc itself w/o issues.  This 
wc had no pending modifications (also see below).

> Can you try with a 1.6.x client instead of a 1.5.x one just to see if
> it makes a difference?

Next steps:

completely removed the "common" folder in one of the broken 
working copies (wc3) and run svn up.  This failed once more in 
"common" after adding some of the old files in the folder (but not 
all) *and* some of the files copied into per rev. 1000, leaving behind 
a couple of the by now well-known triplets *.mine/*.copied/*.rxxx 
and the error message "Can't open file ... System can't find file".

Removed "common" once more and did

svn-1.6.4 up (fails, locked)

svn-1.6.4 cleanup (successful)

svn-1.6.4 up (successful)

So the 1.6 client might make a difference indeed.  However, here's 
probably a clue:

On closer inspection all the working copies broken during update in 
the "common" folder had pending modifications and some of these 
raised conflicts in the source files (foox.php).  This is a factor I did 
not try in the test case script.

These errors were expected and resolved using the incoming "theirs" 
as of rev. 1000.  Actually, the coincidence of several users working 
on non-conflicting parts of foox.php with the necessity to make each 
modification in a number of almost identical siblings led to the 
minor refactoring that led to the whole issue of working copies 
broken on update.

Can it be this?  An issue solved in 1.6?  Or just hidden by chance 
and apt to raise its head again in the future?

> The most important thing, of course, is don't panic :)

Well, we simply don't have the time to panic ;)

> Even if there is a broken revision in there somewhere, you can still
> restore the repository to a sane state by dumping it incrementally and
> fixing revisions as you go along.
> 
> And you do have backups, right?

Every commit is dumped incrementally and appended to a dump 
file.  After a specific number of revisions the dump file is stored 
away, and after a few such dump file cycles another one is done for 
the whole range, sometimes additionally even in-between.  Of 
course, in case the revision of a refactoring is corrupted it might be 
very difficult to impossible to re-do that in a way the later revisions 
can be loaded.

Looks like the users here have to get used to the commandline 
client.  They are used to TSVN, but I am told the developer dropped 
W2K with 1.6 and will drop XP with 1.7.  We never saw any 
necessity to go for XP except for one machine (BIOS/hardware 
prevents W2K from full operation) and definitely will not go for Vista.

Or do any copy-rename-edit operations only when all working 
copies have been committed.  Just as we never switch anymore 
but check out another wc ... a checkout takes time once; a switch 
takes the same amount, but every time.

Thanks for your time, Stefan!

JH
---------------------------------------
Freedom quote:

     It is not my intention to do away with government.
     It is rather to make it work -- work with us, not over us;
     stand by our side, not ride on our back.
     Government can and must provide opportunity, not smother it;
     foster productivity, not stifle it.
               -- Ronald Reagan,
                      First Inaugural Address, January 20, 1981

------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=2388301

To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].

Re: svn copy breaks on svn update

Posted by Stefan Sperling <st...@elego.de>.
On Thu, Aug 27, 2009 at 07:38:31PM +0200, Jan Hendrik wrote:
> Concerning Re: svn copy breaks on svn update
> Stefan Sperling wrote on 27 Aug 2009, 15:15, at least in part:
> 
> > On Thu, Aug 27, 2009 at 01:35:35PM +0200, Jan Hendrik wrote:
> > > > Can you provide a script that starts with an empty repository and
> > > > ends with this error?
> > > 
> > > Well, here is a script which should be close enough, except that it
> > > does not produce yesterday's experience.  IOW with this things work
> > > as one would expect.  Besides the test file "foo.php" attached.
> > 
> > I don't understand. The script does not reproduce the problem at all?
> > What do you want people to do with foo.php?
> 
> I don't understand either what is going on.  Definitely I cannot 
> update any working copy anymore without breaking it the moment 
> update hits that "common" folder.  Yet I can't reproduce it either 
> with the script.

It sounds like a serious problem. But to fix it, we need to isolate
the cause, and to isolate it, we need more information, and the only
person on this planet capable of digging for more information about
this problem is you. Unless someone else shows up who is also seeing
this problem.
 
> As far as I can tell the script reproduces what the user did, with 
> respect to what he had to do, what he says he did, and what the 
> log tells:

There must be something different. Else it would just work.

Can you reproduce the problem on a copy of the repository which is
causing trouble? Can you identify what revision has caused this?

Have you run svnadmin verify on a copy of the repository?
What does it have to say?

Can you try with a 1.6.x client instead of a 1.5.x one just to see
if it makes a difference?

> As things like that usually hit at the worst moment we can't fool 
> around endlessly.  So I manually copied the changes into the other 
> users' working copies, and when they tell me they are ready to 
> commit I copy their changes back into the original user's working 
> copy and commit (as this is the only working copy knowing about 
> those files copied to "common").  In our small team this has to 
> work for the next few days.  However, I suppose that we'll find that 
> the repository has been rendered unusable and therefore lost with 
> all history.

The most important thing, of course, is don't panic :)

Even if there is a broken revision in there somewhere, you can still
restore the repository to a sane state by dumping it incrementally
and fixing revisions as you go along.

And you do have backups, right?

Stefan

------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=2387978

To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].

RE: How should permissions be set in a shared workspace environment?

Posted by James Oltmans <jo...@bolosystems.com>.
What about if the user uses TortoiseSVN over samba?



-----Original Message-----
From: Steve Bakke [mailto:steven.bakke@amd.com] 
Sent: Wednesday, April 11, 2007 3:39 PM
To: James Oltmans; Matt Sickler
Cc: users@subversion.tigris.org
Subject: Re: How should permissions be set in a shared workspace environment?




On 4/11/07 4:42 PM, "James Oltmans" <jo...@bolosystems.com> wrote:

> We realize it is not the best idea and that by using individual workspaces we
> would automatically avoid this problem. However, we have other problems that
> make the individual workspaces solution a very unattractive option at this
> time. 
> We have tried giving them their own copies. They did not like the 20-30 mins
> rebuild turnaround time (we do not have a quick-build option, we are working
> with UniData, not C++ or Java) and the fact that any time they wanted to see a
> project team-member's contribution they needed to rebuild. We also managed to
> fill up the hard drive on the server pretty quickly with 2.5 gig a pop
> workspaces. 
>  
> Given these challenges we would prefer to use a shared workspace. If you know
> for certain this is not possible, that is an acceptable answer; otherwise we'd
> like to know if anyone has experience with doing things "the wrong way" with a
> shared workspace.
>  

Using a shared working copy is definitely possible. (we are currently doing
just that)  You need to make sure people's umask is set to be 2.  That said,
the way that we enforce it is that the commandline client is wrapped in a
script which automatically sets the permissions to be user+group rwx for any
directories and rw for any files.

Obviously when users create new directories on their own, they still need to
set proper permissions.  Just make sure that they source a standard shell
init script or something to make sure their umask is set.
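[A small sketch of the umask point above, assuming a POSIX system: with umask 002, files a developer creates in the shared working copy stay group-writable, so a teammate's client can later replace them. Paths here are illustrative.]

```python
# Demonstrate what umask 002 does to newly created files: a file opened
# with mode 0666 ends up 0664 (user rw, group rw, other r), leaving the
# group with write access in a shared working copy.
import os, stat, tempfile

os.umask(0o002)  # group keeps write permission on newly created files

with tempfile.TemporaryDirectory() as wc:
    path = os.path.join(wc, "FILE1")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)  # mode masked by umask
    os.close(fd)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o664
```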

-steve



> 
> 
> From: Matt Sickler [mailto:crazyfordynamite@gmail.com]
> Sent: Wednesday, April 11, 2007 2:07 PM
> To: James Oltmans
> Cc: users@subversion.tigris.org
> Subject: Re: How should permissions be set in a shared workspace environment?
>  
> it's normal for several devs to work on the same _repository_, but having more
> than one per _working copy_ is a very bad idea.
> At least try to get each one their own copy, and problems like this are
> automatically avoided.
> 
> On 4/10/07, James Oltmans <jo...@bolosystems.com> wrote:
> 
> Hey all, I've got a unix question.
> 
>  
> 
> We're working in an environment where one or more developers will access and
> work on code in the same repository. Yes, I know that's not standard practice
> but we're dealing with some space limitations, developer impatience (no one
> likes to wait 30 minutes to rebuild) and no good way to only rebuild part of
> the working copy.
> 
> Anyway, the subversion problem we're having is that developer A connect to his
> workspace, edits some files, checks them in and is happy. Developer B connects
> to the same workspace, alters the same or different files in the same
> directory that developer A edited files. Developer B checks in his code and
> gets the following lovely error:
> 
> Sending       foo/bar/FILE1
> 
> Transmitting file data .svn: Commit succeeded, but other errors follow:
> 
> svn: Error bumping revisions post-commit (details follow):
> 
> svn: In directory '/…/foo/bar'
> 
> svn: Error processing command 'committed' in '/…/foo/bar'
> 
> svn: Error replacing text-base of 'FILE1'
> 
> svn: Can't change perms of file '/…/foo/bar/FILE1': Operation not permitted
> 
> svn: Your commit message was left in a temporary file:
> 
> svn:    '/…/foo/bar/svn-commit.tmp'
> 
>  
> 
> This leaves directory bar locked. Running svn cleanup can sometimes resolve
> the problem unless another svn file is owned by Developer A. In the case that
> it's not owned by A and cleanup works, Developer B must subsequently run svn
> update to get the .svn directory up to date. This usually results in files
> that were only changed once being flagged as merged during the update, because
> the repo version and the current version are the same but the old text-base
> version is still in the old state.
> 
>  
> 
> We're running on Red Hat Enterprise Linux ES release 4 (Nahant Update 3) with
> svn, version 1.4.0 (r21228)
> 
> All developers are part of groupA and all files are read/writable by the
> group. However, our .svn dirs look like the following:
> 
> total 36
> 
> -r--r----- 1 DeveloperA groupA 232 Apr 10 18:57 all-wcprops
> 
> -r--r----- 1 DeveloperA groupA 58 Apr 10 12:58 dir-prop-base
> 
> -r--r----- 1 DeveloperA groupA 59 Apr 10 19:09 dir-props
> 
> -r--r----- 1 DeveloperA groupA 475 Apr 10 19:09 entries
> 
> -r--r----- 1 DeveloperA groupA 2 Apr 10 12:58 format
> 
> drwxrwx--- 2 DeveloperA groupA 4096 Apr 10 12:58 prop-base
> 
> drwxrwx--- 2 DeveloperA groupA 4096 Apr 10 12:58 props
> 
> drwxrwx--- 2 DeveloperA groupA 4096 Apr 10 12:58 text-base
> 
> drwxrwx--- 5 DeveloperA groupA 4096 Apr 10 19:09 tmp
> 
>  
> 
> Is there some default set of permissions that Subversion uses when creating
> these files? How do I get around this permissions issue when the files that
> are being denied access were created by Subversion?
> 
>  
> 
> Thanks!
> 
> James Oltmans
> SCM Administrator
> 
> Bolo Systems, Inc.
> 
>  
> 
>  
> 
>  
> 




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org


RE: Release/Branching best practices

Posted by James Oltmans <JO...@bolosystems.com>.
Thanks for your response Ian.

Currently we do something like in picture at the bottom. We spawn
projects (p1222) as branches off of releases (prd_7.2.2).

Releases are cut from the trunk. Bugs and projects are moved to the
trunk when they are complete and approved. Our issue is that we have
trouble identifying what is part of each release. For instance, say we
had 12 bugs come from the bugs branch and 2 projects come in, then cut
our release, moved 3 bugs into the trunk, and then found issues with
the release. We fix the release and then merge back to the trunk, but
this starts to get convoluted as to which commits belong to which
release. Our current solution is to require that developers specify in
their log messages (enforced via a pre-commit hook) which release their
fix/project belongs to. This should help us scrape the log messages to
identify which projects and bugs went into a release. 

 

Our other issue is keeping the bugs branch and the trunk in sync. With each
release we merge everything over to bugs (generally this will reapply
fixed bugs and move projects over). However, there's never a guarantee
that it won't screw up the bugs team's development process.

 

Note: We used to have a separate QA branch to keep the trunk always
stable for spawning projects, but there's no point in doing that since
we now spawn projects off the required release branch.

 
 <http://sp1/rd/scm/Help%20images/Source%20PNGs/Merge%20Outline%20Detailed_v3.0.png> 

 

________________________________

From: Ian Wood [mailto:Ian.Wood@sucden.co.uk] 
Sent: Wednesday, November 28, 2007 2:33 AM
To: James Oltmans; users@subversion.tigris.org
Subject: RE: Release/Branching best practices

 

Hi James,

 

This is how we do it. 

 

We have a repo as below.

 

>Trunk

>Branches

 >Versions

  >1_0_0

  >1_0_1

>Tags

 >SuccessfulBuilds

 >1_0_0

  >1_0_0_1

  >1_0_0_2

 

The main work is done on the Trunk. Then each month we make a Version
branch of the current months version, ( just the first three numbers,
the forth is determined by the CruiseControl machine ).

 

The code on this version branch is released to the test team and tested
and any bugs found are then fixed on that branch and released again. 

 

Then when that code is released to live the changes made are merged back
to the Trunk and another branch is taken. 

 

Incidentally, each time the Version branch builds successfully a Tag is
taken with the version number and a deployment script is created. 

 

We are not finding it too burdensome; the only problem we have found is
when people make changes to the same code in both places without merging
as they go. 

 

What are you currently doing?

 

Best regards,

 

Ian

 

 

 

 

 

________________________________

From: James Oltmans [mailto:JOltmans@bolosystems.com] 
Sent: 28 November 2007 00:55
To: users@subversion.tigris.org
Subject: Release/Branching best practices

 

Hello all,

 

Could someone point me in the right direction for finding best-practices
or software to manage releases? We are trying to use a monthly release
cycle and our current branch and merge management is becoming a bit
burdensome.

 

Thanks,
James

 

www.sucden.co.uk <http://www.sucden.co.uk/> 

Sucden (UK) Limited, 5 London Bridge Street, London SE1 9SG
Telephone +44 20 7940 9400
 
Registered in England no. 1095841
VAT registration no. GB 446 9061 33

Authorised and Regulated by the Financial Services Authority (FSA) and
entered in the FSA register under no. 114239

 

This email, including any files transmitted with it, is confidential and
may be privileged. It may be read, copied and used only by the intended
recipient. If you are not the intended recipient of this message, please
notify postmaster@sucden.co.uk immediately and delete it from your
computer system.

 

We believe, but do not warrant, that this email and its attachments are
virus-free, but you should check. 

 

Sucden (UK) Ltd may monitor traffic data of both business and personal
emails. By replying to this email, you consent to Sucden's monitoring
the content of any emails you send to or receive from Sucden. Sucden is
not liable for any opinions expressed by the sender where this is a
non-business email.

The contents of this e-mail do not constitute advice and should not be
regarded as a recommendation to buy, sell or otherwise deal with any
particular investment.



Re: log and stat: where is server?

Posted by a...@test123.ru.
Hi, I just installed ATS 2.1.4 on a fresh ubuntu 10.01. In the config I only changed the timeouts and proxy port, and allowed transparent proxying. Same problem - no host in the log. Not possible to analyze the logs. Hope you will have time to look at this in 2011 :)

Merry Christmas and Happy New Year!

On Thu, 16 Dec 2010 16:19:15 -0700, Leif Hedstrom <zw...@apache.org> wrote:
> On 12/16/2010 12:08 AM, a@test123.ru wrote:
>> Thanx, I found rolling option, it is really documented in admin guide.
>> I am running ATS 2.1.4-unstable, transparent mode. I have no rules for Host: header (actually, I
>> even do not know how to create them and why). traffic_logcat produces 2 kinds of records: w/o
>> server and with server after DIRECT.
> 
> Hmmm, maybe something then with transparent proxy, I haven't really 
> spent a lot of time testing it. Alan, any thoughts on these issues? 
> Anything in the logs that would cause our tools not to properly handle 
> the "host" ?
> 
> -- leif
> 
>> 1292482820.160 240 192.168.108.98 TCP_MISS/200 491 GET http:///data/mail.js?yaru=y -
>> DIRECT/www.yandex.ru text/javascript -
>> 1292482820.220 20 192.168.108.98 TCP_HIT/200 705 GET http:///jquery-1-4-2.crossframeajax.html -
>> NONE/- text/html -
>>
>>
>> On Tue, 14 Dec 2010 14:00:08 -0700, Leif Hedstrom<zw...@apache.org>  wrote:
>>> On 12/14/2010 12:30 PM, a@test123.ru wrote:
>>>> Sorry, forgot yet another question. ATS rotates log everyday, how do I change it to rotate
>>>> weekly? Can't find the option in records.config...
>>> CONFIG proxy.config.log.rolling_interval_sec INT 604800
>>>
>>>
>>> You can also roll on size of the log files etc., fairly certain it's
>>> documented in the Admin guide?
>>>
>>>
>>> As for your other problem, do you have some rule that matches on
>>> requests without Host: headers? What version of ATS are you using? I'm
>>> not seeing anything like that in my run of logcat. What is odd is that
>>> traffic_logstats ought to show the same as traffic_logcat, logstats will
>>> just parse the data in some different ways (but it'd require the same
>>> "host" information in the URL to work).
>>>
>>> -- leif
>>


Re: log and stat: where is server?

Posted by Leif Hedstrom <zw...@apache.org>.
On 12/16/2010 12:08 AM, a@test123.ru wrote:
> Thanx, I found rolling option, it is really documented in admin guide.
> I am running ATS 2.1.4-unstable, transparent mode. I have no rules for Host: header (actually, I even do not know how to create them and why). traffic_logcat produces 2 kinds of records: w/o server and with server after DIRECT.

Hmmm, maybe something then with transparent proxy, I haven't really 
spent a lot of time testing it. Alan, any thoughts on these issues? 
Anything in the logs that would cause our tools not to properly handle 
the "host" ?

-- leif

> 1292482820.160 240 192.168.108.98 TCP_MISS/200 491 GET http:///data/mail.js?yaru=y - DIRECT/www.yandex.ru text/javascript -
> 1292482820.220 20 192.168.108.98 TCP_HIT/200 705 GET http:///jquery-1-4-2.crossframeajax.html - NONE/- text/html -
>
>
> On Tue, 14 Dec 2010 14:00:08 -0700, Leif Hedstrom<zw...@apache.org>  wrote:
>> On 12/14/2010 12:30 PM, a@test123.ru wrote:
>>> Sorry, forgot yet another question. ATS rotates log everyday, how do I change it to rotate
>>> weekly? Can't find the option in records.config...
>> CONFIG proxy.config.log.rolling_interval_sec INT 604800
>>
>>
>> You can also roll on size of the log files etc., fairly certain it's
>> documented in the Admin guide?
>>
>>
>> As for your other problem, do you have some rule that matches on
>> requests without Host: headers? What version of ATS are you using? I'm
>> not seeing anything like that in my run of logcat. What is odd is that
>> traffic_logstats ought to show the same as traffic_logcat, logstats will
>> just parse the data in some different ways (but it'd require the same
>> "host" information in the URL to work).
>>
>> -- leif
>


Re: log and stat: where is server?

Posted by a...@test123.ru.
Thanks, I found the rolling option; it is indeed documented in the admin guide.
I am running ATS 2.1.4-unstable in transparent mode. I have no rules for the Host: header (actually, I don't even know how to create them, or why). traffic_logcat produces two kinds of records: without a server, and with a server after DIRECT.

1292482820.160 240 192.168.108.98 TCP_MISS/200 491 GET http:///data/mail.js?yaru=y - DIRECT/www.yandex.ru text/javascript -
1292482820.220 20 192.168.108.98 TCP_HIT/200 705 GET http:///jquery-1-4-2.crossframeajax.html - NONE/- text/html -


On Tue, 14 Dec 2010 14:00:08 -0700, Leif Hedstrom <zw...@apache.org> wrote:
> On 12/14/2010 12:30 PM, a@test123.ru wrote:
>> Sorry, forgot yet another question. ATS rotates log everyday, how do I change it to rotate
>> weekly? Can't find the option in records.config...
> 
> CONFIG proxy.config.log.rolling_interval_sec INT 604800
> 
> 
> You can also roll on size of the log files etc., fairly certain it's 
> documented in the Admin guide?
> 
> 
> As for your other problem, do you have some rule that matches on 
> requests without Host: headers? What version of ATS are you using? I'm 
> not seeing anything like that in my run of logcat. What is odd is that 
> traffic_logstats ought to show the same as traffic_logcat, logstats will 
> just parse the data in some different ways (but it'd require the same 
> "host" information in the URL to work).
> 
> -- leif



Re: log and stat: where is server?

Posted by Leif Hedstrom <zw...@apache.org>.
On 12/14/2010 12:30 PM, a@test123.ru wrote:
> Sorry, forgot yet another question. ATS rotates log everyday, how do I change it to rotate weekly? Can't find the option in records.config...

CONFIG proxy.config.log.rolling_interval_sec INT 604800


You can also roll on size of the log files etc., fairly certain it's 
documented in the Admin guide?


As for your other problem, do you have some rule that matches on 
requests without Host: headers? What version of ATS are you using? I'm 
not seeing anything like that in my run of logcat. What is odd is that 
traffic_logstats ought to show the same as traffic_logcat, logstats will 
just parse the data in some different ways (but it'd require the same 
"host" information in the URL to work).

-- leif
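
For reference, the 604800 in the setting above is simply one week expressed
in seconds, so other rolling intervals can be derived the same way:

```python
# Rolling intervals for records.config, expressed in seconds.
seconds_per_day = 24 * 60 * 60
seconds_per_week = 7 * seconds_per_day
print(seconds_per_day)   # 86400
print(seconds_per_week)  # 604800
```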



Re: svn log questions

Posted by François Beausoleil <fb...@ftml.net>.

On 10/12/2004 12:05, trlists@clayst.com wrote:
> What switch (if there is one) means "run the subcommand on a default 
> target of CWD at the most recent revision which has a committed change 
> to CWD"?

svn log -rhead:1 .

> What exactly is getting updated when you do an svn update and nothing 
> has actually changed except the single file you just committed?  Is it 
> just the revision number in .svn/entries?

Yup!

Bye,
François

Re: [PATCH] Improve test cases for svn.wc python bindings

Posted by Daniel Rall <dl...@collab.net>.
It doesn't appear that the entirety of this patch was committed to the
Python bindings.  What is its status?

- Dan

On Fri, 23 Jun 2006, Madan S. wrote:

> Hi,
> 
> On Fri, 23 Jun 2006 15:54:52 +0530, Jelmer Vernooij <je...@samba.org>  
> wrote:
> 
> [snip]
> >Sorry, I've been busy with other things. I'll try to have a look at your
> >patch over this weekend.
> 
> I have modified the patch as per your comments on irc (retain assert_ for  
> existing tests). Pl. find attached.
> There is a line in the setUp() function where the change is the same as in  
> my other patch at  
> http://subversion.tigris.org/servlets/ReadMsg?list=dev&msgNo=117063. FYI.
> 
> [[[
> Improve existing tests for the svn.wc python binding.
> 
> * subversion/bindings/swig/python/tests/wc.py
>   (setUp): Modify adm_open3() call to open the whole working-copy tree.
>   (test_check_wc): Add invalid-file case.
>   (test_get_ancestry): Add invalid-file case.
>   (test_status): Add cases with varying input.
> ]]]
> 
> Regards,
> Madan.

> Index: subversion/bindings/swig/python/tests/wc.py
> ===================================================================
> --- subversion/bindings/swig/python/tests/wc.py	(revision 20225)
> +++ subversion/bindings/swig/python/tests/wc.py	(working copy)
> @@ -31,7 +31,7 @@
>      client.checkout2(self.repos_url, self.path, rev, rev, True, True, 
>              client_ctx)
>  
> -    self.wc = wc.adm_open3(None, self.path, True, 0, None)
> +    self.wc = wc.adm_open3(None, self.path, True, -1, None)
>  
>    def test_entry(self):
>        wc_entry = wc.entry(self.path, self.wc, True)
> @@ -75,14 +75,34 @@
>  
>    def test_check_wc(self):
>        self.assert_(wc.check_wc(self.path) > 0)
> +      self.assertRaises(SubversionException, wc.check_wc,
> +                        os.path.join(self.path,"NONEXISTANTFILE"))
>  
>    def test_get_ancestry(self):
>        self.assertEqual([self.repos_url, 12], 
>                         wc.get_ancestry(self.path, self.wc))
> +      self.assertRaises(SubversionException,
> +                        wc.get_ancestry,
> +                        os.path.join(self.path, "NONEXISTANTFILE"),
> +                        self.wc)
>  
>    def test_status(self):
>        wc.status2(self.path, self.wc)
>  
> +      # Prepare for the tests: Remove a versioned file, add an unversioned file
> +      removed_versioned_file = os.path.join(self.path, "trunk", "README.txt")
> +      unversioned_file = os.path.join(self.path, "UNVERSIONEDFILE")
> +      nonexistant_file = os.path.join(self.path, "NONEXISTANTFILE")
> +      os.remove(removed_versioned_file)
> +      open(unversioned_file, 'w').close()
> +
> +      self.assertEqual(wc.status2(removed_versioned_file, self.wc).text_status,
> +                       wc.svn_wc_status_missing)
> +      self.assertEqual(wc.status2(nonexistant_file, self.wc).text_status,
> +                       wc.svn_wc_status_none)
> +      self.assertEqual(wc.status2(unversioned_file, self.wc).text_status,
> +                       wc.svn_wc_status_unversioned)
> +
>    def test_is_normal_prop(self):
>        self.failIf(wc.is_normal_prop('svn:wc:foo:bar'))
>        self.failIf(wc.is_normal_prop('svn:entry:foo:bar'))

> Improve existing tests for the svn.wc python binding.
> 
> * subversion/bindings/swig/python/tests/wc.py
>   (setUp): Modify adm_open3() call to open the whole working-copy tree.
>   (test_check_wc): Add invalid-file case.
>   (test_get_ancestry): Add invalid-file case.
>   (test_status): Add cases with varying input.

Re: [PATCH] Improve test cases for svn.wc python bindings

Posted by Madan U Sreenivasan <ma...@collab.net>.
Hi,

On Fri, 23 Jun 2006 15:54:52 +0530, Jelmer Vernooij <je...@samba.org>  
wrote:

[snip]
> Sorry, I've been busy with other things. I'll try to have a look at your
> patch over this weekend.

I have modified the patch as per your comments on irc (retain assert_ for  
existing tests). Please find it attached.
There is a line in the setUp() function where the change is the same as in  
my other patch at  
http://subversion.tigris.org/servlets/ReadMsg?list=dev&msgNo=117063. FYI.

[[[
Improve existing tests for the svn.wc python binding.

* subversion/bindings/swig/python/tests/wc.py
   (setUp): Modify adm_open3() call to open the whole working-copy tree.
   (test_check_wc): Add invalid-file case.
   (test_get_ancestry): Add invalid-file case.
   (test_status): Add cases with varying input.
]]]

Regards,
Madan.

Re: [PATCH] Improve test cases for svn.wc python bindings

Posted by Jelmer Vernooij <je...@samba.org>.
On Thu, 2006-06-22 at 17:31 +0530, Madan U Sreenivasan wrote:
> On Mon, 19 Jun 2006 14:50:34 +0530, Madan U Sreenivasan <ma...@collab.net>  
> wrote:
> 
> > On Sat, 17 Jun 2006 17:38:01 +0530, Madan U Sreenivasan  
> > <ma...@collab.net> wrote:
> >
> >> On Sat, 17 Jun 2006 15:25:35 +0530, Jelmer Vernooij <je...@samba.org>  
> >> wrote:
> > [snip]
> >>> Looks good! Two small comments from a quick glance over your patch:
> >>> Why the assert_ -> failUnless change?
> >>
> >> IIUC, assert_ will cause and assertion and hence an error. failUnless  
> >> will cause a failure (test failure)
> >
> > No, I was wrong, assert_() and failUnless() effectively do the same. But  
> > I still feel that the name `failUnless' makes more sense in a unit-test  
> > scenario. What do you think?
> >
> >>> I'm not sure whether adding tests that fail are a good idea - it makes
> >>> it harder to catch real regressions. Can you comment out the failing
> >>> ones and a TODO or send a fix along that fixes the actual bug?
> >>
> >> Good idea. Will do that, and send the patch again. Thanks. :)
> >
> > Jelmer: I have this patch ready... will resend this patch, after getting  
> > done, what you have to say about assert_() vs failUnless(). Hope this is  
> > okay.
> 
> Jelmer...?
Sorry, I've been busy with other things. I'll try to have a look at your
patch over this weekend.

Cheers,

Jelmer
-- 
Jelmer Vernooij <je...@samba.org> - http://samba.org/~jelmer/

Re: Secure SSL connections between Solr and ZooKeeper

Posted by Sam Lee <sa...@yahoo.com.INVALID>.
On 2022-03-25 03:43 +0000, Sam Lee wrote:
> The solution is to add the appropriate ZooKeeper Java properties. Notice
> that these are exactly the same properties needed by standalone
> ZooKeeper's 'zkServer.sh' and 'zkCli.sh' to connect to ZooKeeper via
> SSL [1] [2]. Add the following to bin/solr.in.sh:
>
> --8<---------------cut here---------------start------------->8---
> SOLR_OPTS="$SOLR_OPTS
>     -Dzookeeper.client.secure=true
>     -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
>     -Dzookeeper.ssl.keyStore.location=/path/to/zk-keystore.jks
>     -Dzookeeper.ssl.keyStore.password=thepassword
>     -Dzookeeper.ssl.trustStore.location=/path/to/zk-truststore.jks
>     -Dzookeeper.ssl.trustStore.password=thepassword"
> --8<---------------cut here---------------end--------------->8---

Actually, it's more appropriate to add these ZooKeeper properties to the
"SOLR_ZK_CREDS_AND_ACLS" variable in bin/solr.in.sh instead of adding
them directly to "SOLR_OPTS". This is to enable the "bin/solr zk ..."
command to connect to ZooKeeper correctly.

(Solr version 8.11, external ZooKeeper version 3.6.3).
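
A minimal sketch of what that might look like in bin/solr.in.sh (paths and
passwords are placeholders; the exact variable contents should be checked
against your Solr version's documentation):

```shell
# Hypothetical sketch: the same ZooKeeper SSL properties attached to
# SOLR_ZK_CREDS_AND_ACLS so that both the Solr server and "bin/solr zk ..."
# pick them up. Replace the paths and passwords with your own.
SOLR_ZK_CREDS_AND_ACLS="-Dzookeeper.client.secure=true \
  -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty \
  -Dzookeeper.ssl.keyStore.location=/path/to/zk-keystore.jks \
  -Dzookeeper.ssl.keyStore.password=thepassword \
  -Dzookeeper.ssl.trustStore.location=/path/to/zk-truststore.jks \
  -Dzookeeper.ssl.trustStore.password=thepassword"
```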

Re: Submitting form with a zone or not, depending on submit button choosed

Posted by Nicolas Bouillon <ni...@bouil.org>.
Thank you so much, it works very well.

For the record, here is my test code:

public class Ajax {
    @Property
    private String selectValue;

    @Property
    private String fieldValue;

    @Inject
    @Property
    private Block formBlock;

    @OnEvent(value = EventConstants.SELECTED, component = "submitBtn")
    public void submitFromButton() {
        reloadPage = false;
    }

    private boolean reloadPage = true;

    public Object onSuccess() {
        fieldValue = fieldValue + " / "
                + Long.toString(System.currentTimeMillis());
        if (reloadPage) {
            return formBlock;
        } else {
            return Index.class;
        }
    }

}

<html t:type="layout"
xmlns:t="http://tapestry.apache.org/schema/tapestry_5_1_0.xsd"
xmlns:p="tapestry:parameter">

	<t:zone id="updateZone">
		<t:delegate to="formBlock" />
	</t:zone>

	<t:block t:id="formBlock">
		<t:form zone="updateZone">

			<t:select model="literal:1,2,3" value="selectValue"
				onChange="this.form.fire(Tapestry.FORM_PROCESS_SUBMIT_EVENT)" />

			<t:textfield value="fieldValue" />

			<t:submit t:id="submitBtn" />
		</t:form>

	</t:block>
</html>

On Wed, 15 Sep 2010 10:21:16 -0300, "Thiago H. de Paula Figueiredo"
<th...@gmail.com> wrote:
> On Wed, 15 Sep 2010 09:54:36 -0300, Nicolas Bouillon <ni...@bouil.org>

> wrote:
> 
>> Hi,
> 
> Hi!
> 
>> Maybe the explanation of my problem was not clear enough (or there is no
>> solution ?)...
>>
>> So i have the following template :
>>
>> <t:form>
>>   <t:select model="selectModel" value="selectValue"
>> onchange="this.form.submit()" />
> 
> This JavaScript snippet doesn't trigger an AJAX form submission. Use  
> $(formId).fire(Tapestry.FORM_PROCESS_SUBMIT_EVENT);, where formId is
> the client id of your form. That's exactly how Tapestry itself does AJAX
> form submissions.

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
For additional commands, e-mail: users-help@tapestry.apache.org


Re: Submitting form with a zone or not, depending on submit button choosed

Posted by "Thiago H. de Paula Figueiredo" <th...@gmail.com>.
On Wed, 15 Sep 2010 09:54:36 -0300, Nicolas Bouillon <ni...@bouil.org>  
wrote:

> Hi,

Hi!

> Maybe the explanation of my problem was not clear enough (or there is no
> solution ?)...
>
> So i have the following template :
>
> <t:form>
>   <t:select model="selectModel" value="selectValue"
> onchange="this.form.submit()" />

This JavaScript snippet doesn't trigger an AJAX form submission. Use  
$(formId).fire(Tapestry.FORM_PROCESS_SUBMIT_EVENT);, where formId is
the client id of your form. That's exactly how Tapestry itself does AJAX  
form submissions.

-- 
Thiago H. de Paula Figueiredo
Independent Java, Apache Tapestry 5 and Hibernate consultant, developer,  
and instructor
Owner, Ars Machina Tecnologia da Informação Ltda.
http://www.arsmachina.com.br

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
For additional commands, e-mail: users-help@tapestry.apache.org


Re: (In)Dependence on servlet

Posted by Nicola Ken Barozzi <ni...@apache.org>.
Upayavira wrote, On 14/03/2003 22.50:
>>
>>Well, actually the CLI has an additional jar needed, the CLI jar (soon
>>switch as Avalon is doing to commons-cli).
> 
> Let me know when we're ready to do so, and I'll happily send in a patch to move to 
> the commons-cli.

Even now, the sooner the better :-)

-- 
Nicola Ken Barozzi                   nicolaken@apache.org
             - verba volant, scripta manent -
    (discussions get forgotten, just code remains)
---------------------------------------------------------------------


Re: Is this a bug for post version soap 2.3.1?

Posted by Daniel Zhang <zh...@clinicaltools.com>.
Thank you, Scott! You are right. I had an old soap.jar in
%JAVA_HOME%/jre/lib/ext; after deleting it, it works! It seems we should
not put soap.jar there. I learned a lesson.

-Daniel

Scott Nichol wrote:

>Check for an old soap.jar in %JAVA_HOME%/jre/lib/ext or 
>%JAVA_HOME%/lib/ext.  Classes there will get picked up before those 
>in your classpath.
>
>On 2 Jun 2003 at 11:07, Daniel Zhang wrote:
>
>  
>
>>Scott -
>>
>>Are you sure that you download soap-bin-2.3.1.zip from the latest 
>>nightly directory? I follow your way exactly and found no
>>getEnvelope method in the output, the following is what I did, output is 
>>in attachment.
>>
>>(1) Download soap-bin-2.3.1.zip from 
>>http://cvs.apache.org/dist/soap/nightly/2003-06-02/
>>(2) Use WINZIP to unzip to D: drive
>>(3) Ran
>>
>>D:\soap-2_3_1\lib>%JAVA_HOME%/bin/javap -classpath 
>>d:\soap-2_3_1\lib\soap.jar;%CLASSPATH% org.apache.soap.rpc.SOAPContext > 
>>output.txt
>>
>>I am using java version "1.4.1_01".Then I checked output and found NO 
>>getEnvelope method there. See attachment. Any ideas?
>>
>>-Daniel
>>
>>Scott Nichol wrote:
>>
>>    
>>
>>>The method is there.  What I did to confirm this is
>>>
>>>1. Download soap-bin-2.3.1.zip from the latest nightly directory.
>>>2. Unzipped to I:
>>>3. Ran
>>>
>>>I:\soap-2_3_1\lib>javap -classpath 
>>>i:\soap2_3_1\lib\soap.jar;%CLASSPATH% org.apache.soap.rpc.SOAPContext
>>>
>>>The output is attached.
>>>
>>>On 2 Jun 2003 at 9:32, Daniel Zhang wrote:
>>>
>>> 
>>>
>>>      
>>>
>>>>Hi, Scott Nichol -
>>>>
>>>>I download soap nightly build from 
>>>>http://cvs.apache.org/dist/soap/nightly/ and try to use a method
>>>>getEnvelope() in class org.apache.soap.rpc.SOAPContext in my soap 
>>>>program. I found its JavaDoc
>>>>lists this method but my compiler (NetBean) complained it can not find 
>>>>this method from soap.jar I
>>>>got from the same build.
>>>>
>>>>Is this a bug? Please tell me how to fix it. Thanks a lot!
>>>>
>>>>-Daniel
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>   
>>>>
>>>>        
>>>>



Re: W3C <-> Apache

Posted by Henrik Frystyk Nielsen <fr...@w3.org>.
At 23:28 2/10/98 +0000, Ben Laurie wrote:

>Amazing! You think you can dictate conditions to us. As the de facto
>setter of standards for Webservers, I say that you must follow our
>process, namely to submit patches to Apache and gain appropriate support
>for those patches. Until you do, you have not followed the due process
>for changing the way the Web works.

That would work but I was actually proposing a mechanism for working closer
together. I guess your proposal would make that harder. I don't
particularly think that is a good idea and it is not what I have heard
mentioned in today's discussion.

>Now, in case people think I'm completely serious about this: of course,
>I'm not. I'm more reasonable than W3C. OTOH, the temptation of 50k a hit
>could make me much less reasonable. But seriously: in what way is what I
>have said different to W3C's approach? If anything it should carry more
>weight. We represent more market than they do, after all.

Since when has W3C been in any market? If you read our contract then we
explicitly state that we don't compete with anyone.

Henrik
--
Henrik Frystyk Nielsen,
World Wide Web Consortium
http://www.w3.org/People/Frystyk

Re: W3C <-> Apache

Posted by Ben Laurie <be...@algroup.co.uk>.
It's late, and I have things to say about much of this tomorrow, but I
can't let this one go...

Henrik Frystyk Nielsen wrote:
> As I said, I would very much like to get more input from the Apache group -
> I do need people who can do real work. I have made the conditions clear so
> here you go - it's up to you!

Amazing! You think you can dictate conditions to us. As the de facto
setter of standards for Webservers, I say that you must follow our
process, namely to submit patches to Apache and gain appropriate support
for those patches. Until you do, you have not followed the due process
for changing the way the Web works.

Now, in case people think I'm completely serious about this: of course,
I'm not. I'm more reasonable than W3C. OTOH, the temptation of 50k a hit
could make me much less reasonable. But seriously: in what way is what I
have said different to W3C's approach? If anything it should carry more
weight. We represent more market than they do, after all.

Cheers,

Ben.

-- 
Ben Laurie            |Phone: +44 (181) 735 0686|Apache Group member
Freelance Consultant  |Fax:   +44 (181) 735 0689|http://www.apache.org
and Technical Director|Email: ben@algroup.co.uk |Apache-SSL author
A.L. Digital Ltd,     |http://www.algroup.co.uk/Apache-SSL
London, England.      |"Apache: TDG" http://www.ora.com/catalog/apache

Re: Problem with java.lang.OutOfMemoryError

Posted by Bojan Smojver <bo...@binarix.com>.
I've bumped into some out-of-memory problems when using
StringCharacterIterator on IBM's JDK 1.3.0 and 1.3.1 for Linux (I don't
use other platforms, so I don't know if it's platform specific). The
piece of code that worked fine suddenly caused problems. The JVM would
run out of memory (I use TC 3.3.x). I've changed that code since to use
arrays, and all is cool. I'm mentioning this just in case you have
similar stuff in your code and similar JVM.

I would definitely try to give the JVM a bit more memory (if you have
some available), since this could be just a plain lack of memory when
the JVM needs more. If you use sessions, there is going to be plenty of
stuff hanging around before the sessions die by timeout. And that could
be eating the memory. And finally and most importantly - I found that
most of the problems in my applications (I also use servlets, but with
Velocity) are caused by me, not by Tomcat. So, inspecting your code for
potential memory leaks, increased logging to figure out what the apps
were doing when the problem happens might help too.

It might be worthwhile downloading the latest released version of TC 4
too, but since I don't follow the development of TC 4, I'm not sure if
any bugs related to memory leaks have been fixed recently.
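
Giving the JVM more memory, as suggested above, means passing heap flags to
the startup scripts; a hedged sketch for a Tomcat 4 setup (the flag values
are illustrative only, and how the options variable is named can differ
between Tomcat releases):

```shell
# Hypothetical sketch: raise the initial and maximum heap sizes before
# starting Tomcat 4. CATALINA_OPTS is read by catalina.sh; adjust the
# values to what your machine can actually spare.
export CATALINA_OPTS="-Xms128m -Xmx512m"
# then start Tomcat as usual, e.g.: $CATALINA_HOME/bin/catalina.sh start
```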

Bojan

On Fri, 2002-04-26 at 09:17, Desarrollo e Investigación wrote:
> Well, the server goes down twice a week. We are working just with servlets and in 
> the web server are connected 150 users, approximate (my english is not so good, 
> sorry).
> 
> Thanks.
> 
> Adolfo.
> 
> 
> On 26 Apr 2002 at 8:50, Bojan Smojver wrote:
> 
> > Have you tried giving your JVM a bit more memory? Does that keep it stay
> > alive for a little while longer or it makes no difference? The
> > 'sometimes goes down' happens once a day, once a week or once a month?
> > 
> > Bojan
> > 
> > On Fri, 2002-04-26 at 06:23, Desarrollo e Investigación wrote:
> > > 
> > > 	Hello. I use Apache Tomcat/4.0.2 in Linux Red Hat 7.1 and sometimes go 
> > > down with the message:
> > > ------------------------------------------------
> > > HTTP Status 500
> > > Internal Server Error
> > > ...
> > > root cause
> > > java.lang.OutOfMemoryError
> > > ------------------------------------------------
> > > I don't know the cause..
> > > 
> > > --
> > > To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
> > > For additional commands, e-mail: <ma...@jakarta.apache.org>
> > > 
> > 
> > 
> > 
> > --
> > To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
> > For additional commands, e-mail: <ma...@jakarta.apache.org>
> 
> 
> 
> --
> To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
> For additional commands, e-mail: <ma...@jakarta.apache.org>
> 



--
To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
For additional commands, e-mail: <ma...@jakarta.apache.org>


Re: [OT] London Cocoon meet

Posted by Upayavira <uv...@upaya.co.uk>.
> On 14/01/2003 17:03, "Andrew Savory" <an...@luminas.co.uk> wrote:
> 
> > Ok, looks like list lag got us ... I checked at 12:45 but hadn't got any
> > replies. So, let's try this: how about meeting up next Tuesday (21st) or
> > Wednesday (22nd), for either lunch or evening meal? (Send your preference
> > to the list, and we'll see if we can get consensus).
> 21st I can't do
> 22nd is good for me - 1pm or later if it's central.

I would be very interested, so long as it's not a Wednesday lunch. Could do 
Wednesday eve.

Regards, Upayavira

---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org


Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Upayavira <up...@fwbo.org>.
I should be able to make monday night.

Regards, Upayavira

> On Tue, 14 Jan 2003, Stefano Mazzocchi wrote:
> 
> > Anyway, I'm leaving for Italy on the 22nd and I can't change that.
> > So, it's either monday night or tomorrow at lunch.
> 
> Yikes! Ok, let's make it Monday 20th night then. I'd suggest meeting
> in a fairly central pub from 6 (I'll trust Thom to suggest one!) and
> then heading on for food at Satsuma or somewhere similar.
> 
> 
> Andrew.
> 
> -- 
> Andrew Savory                                Email:
> andrew@luminas.co.uk Managing Director                             
> Tel:  +44 (0)870 741 6658 Luminas Internet Applications               
>   Fax:  +44 (0)700 598 1135 This is not an official statement or
> order.    Web:    www.luminas.co.uk
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org For
> additional commands, email: cocoon-dev-help@xml.apache.org
> 



---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org


Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Jeremy Quinn <je...@media.demon.co.uk>.
On Thursday, Jan 16, 2003, at 01:43 Europe/London, Stefano Mazzocchi 
wrote:

> Andrew Savory wrote:
>> On Tue, 14 Jan 2003, Stefano Mazzocchi wrote:
>>> Anyway, I'm leaving for Italy on the 22nd and I can't change that. 
>>> So,
>>> it's either monday night or tomorrow at lunch.
>> Yikes! Ok, let's make it Monday 20th night then. I'd suggest meeting 
>> in a
>> fairly central pub from 6 (I'll trust Thom to suggest one!) and then
>> heading on for food at Satsuma or somewhere similar.
>
> +1

Me too!

Which pub?

regards Jeremy


---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org


Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Stefano Mazzocchi <st...@apache.org>.
Andrew Savory wrote:
> On Tue, 14 Jan 2003, Stefano Mazzocchi wrote:
> 
> 
>>Anyway, I'm leaving for Italy on the 22nd and I can't change that. So,
>>it's either monday night or tomorrow at lunch.
> 
> 
> Yikes! Ok, let's make it Monday 20th night then. I'd suggest meeting in a
> fairly central pub from 6 (I'll trust Thom to suggest one!) and then
> heading on for food at Satsuma or somewhere similar.

+1

-- 
Stefano Mazzocchi                               <st...@apache.org>
--------------------------------------------------------------------



---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org


Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by luke hubbard <lu...@rroom.net>.
On 17 Jan 2003 at 12:08, Stefano Mazzocchi wrote:

> > Ok, slug and lettuce from about 6pm, probably downstairs near the pool
> > table. See http://www.slugandlettuce.co.uk/directions/soho.htm
> 
> Great, thanks for picking it up Andrew :)
> 
> See you people there.
> 

Count me in too, see you there.  Luke


---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org


[OT] London Cocoon meet TONIGHT

Posted by Andrew Savory <an...@luminas.co.uk>.
Just a quick reminder ... an ideal opportunity to come and buy Stefano
some birthday beers ;-)

On Fri, 17 Jan 2003, Andrew Savory wrote:

> Ok, slug and lettuce from about 6pm, probably downstairs near the pool
> table. See http://www.slugandlettuce.co.uk/directions/soho.htm

(Followed by Satsuma, http://www.london-eating.co.uk/163.htm)


Andrew.

-- 
Andrew Savory                                Email: andrew@luminas.co.uk
Managing Director                              Tel:  +44 (0)870 741 6658
Luminas Internet Applications                  Fax:  +44 (0)700 598 1135
This is not an official statement or order.    Web:    www.luminas.co.uk



Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Stefano Mazzocchi <st...@apache.org>.
Andrew Savory wrote:
> On Thu, 16 Jan 2003, Thom May wrote:
> 
> 
>>+1 to date/time/food
>>-1 to me having to pick the bar, its someone else's turn ;-)
> 
> 
> Ok, slug and lettuce from about 6pm, probably downstairs near the pool
> table. See http://www.slugandlettuce.co.uk/directions/soho.htm

Great, thanks for picking it up Andrew :)

See you people there.

I'll try to bring both Pier and Fede with me :)

-- 
Stefano Mazzocchi                               <st...@apache.org>
--------------------------------------------------------------------





Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Andrew Savory <an...@luminas.co.uk>.
On Thu, 16 Jan 2003, Thom May wrote:

> +1 to date/time/food
> -1 to me having to pick the bar, its someone else's turn ;-)

Ok, slug and lettuce from about 6pm, probably downstairs near the pool
table. See http://www.slugandlettuce.co.uk/directions/soho.htm


Andrew.

-- 
Andrew Savory                                Email: andrew@luminas.co.uk
Managing Director                              Tel:  +44 (0)870 741 6658
Luminas Internet Applications                  Fax:  +44 (0)700 598 1135
This is not an official statement or order.    Web:    www.luminas.co.uk



Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Thom May <th...@planetarytramp.net>.
* Andrew Savory (andrew@luminas.co.uk) wrote :
> On Tue, 14 Jan 2003, Stefano Mazzocchi wrote:
> 
> > Anyway, I'm leaving for Italy on the 22nd and I can't change that. So,
> > it's either monday night or tomorrow at lunch.
> 
> Yikes! Ok, let's make it Monday 20th night then. I'd suggest meeting in a
> fairly central pub from 6 (I'll trust Thom to suggest one!) and then
> heading on for food at Satsuma or somewhere similar.
> 
+1 to date/time/food
-1 to me having to pick the bar, it's someone else's turn ;-)

-Thom



Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Pier Fumagalli <pi...@betaversion.org>.
On 14/1/03 6:05 pm, "Andrew Savory" <an...@luminas.co.uk> wrote:

> On Tue, 14 Jan 2003, Stefano Mazzocchi wrote:
> 
>> Anyway, I'm leaving for Italy on the 22nd and I can't change that. So,
>> it's either monday night or tomorrow at lunch.
> 
> Yikes! Ok, let's make it Monday 20th night then. I'd suggest meeting in a
> fairly central pub from 6 (I'll trust Thom to suggest one!) and then
> heading on for food at Satsuma or somewhere similar.

Satsuma's fine! :-)

    Pier




Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Andrew Savory <an...@luminas.co.uk>.
On Tue, 14 Jan 2003, Stefano Mazzocchi wrote:

> Anyway, I'm leaving for Italy on the 22nd and I can't change that. So,
> it's either monday night or tomorrow at lunch.

Yikes! Ok, let's make it Monday 20th night then. I'd suggest meeting in a
fairly central pub from 6 (I'll trust Thom to suggest one!) and then
heading on for food at Satsuma or somewhere similar.


Andrew.

-- 
Andrew Savory                                Email: andrew@luminas.co.uk
Managing Director                              Tel:  +44 (0)870 741 6658
Luminas Internet Applications                  Fax:  +44 (0)700 598 1135
This is not an official statement or order.    Web:    www.luminas.co.uk



Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Stefano Mazzocchi <st...@apache.org>.
Andrew Savory wrote:
> On Tue, 14 Jan 2003, Thom May wrote:
> 
> 
>>Any objections to a not-really-cocoon (but I can bluff well)  apache-er tagging
>>along?
>>I'm good both days, whether lunch or evening... Evening is perhaps better
>>but not a big issue.
> 
> 
> Sounds good to me!

Sure, Thom, come along. In fact, my first request was copied to 
party@apache.org but nobody replied from there.

Anyway, I'm leaving for Italy on the 22nd and I can't change that. So, 
it's either monday night or tomorrow at lunch.

People, sorry for somehow abusing this list but not every cocoon lurker 
is subscribed to party@apache.org (at least, I wouldn't think so)

-- 
Stefano Mazzocchi                               <st...@apache.org>
--------------------------------------------------------------------





Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Andrew Savory <an...@luminas.co.uk>.
On Tue, 14 Jan 2003, Thom May wrote:

> Any objections to a not-really-cocoon (but I can bluff well)  apache-er tagging
> along?
> I'm good both days, whether lunch or evening... Evening is perhaps better
> but not a big issue.

Sounds good to me!


Andrew.

-- 
Andrew Savory                                Email: andrew@luminas.co.uk
Managing Director                              Tel:  +44 (0)870 741 6658
Luminas Internet Applications                  Fax:  +44 (0)700 598 1135
This is not an official statement or order.    Web:    www.luminas.co.uk



Re: [OT] London Cocoon meet (was Re: Back in London)

Posted by Mark Leicester <ma...@metering.co.nz>.
On 14/01/2003 17:03, "Andrew Savory" <an...@luminas.co.uk> wrote:

> Ok, looks like list lag got us ... I checked at 12:45 but hadn't got any
> replies. So, let's try this: how about meeting up next Tuesday (21st) or
> Wednesday (22nd), for either lunch or evening meal? (Send your preference
> to the list, and we'll see if we can get consensus).
21st I can't do
22nd is good for me - 1pm or later if it's central.




[OT] London Cocoon meet (was Re: Back in London)

Posted by Andrew Savory <an...@luminas.co.uk>.
On Tue, 14 Jan 2003, Mark Leicester wrote:

> On 14/01/2003 15:11, "Stefano Mazzocchi" <st...@apache.org> wrote:
>
> > luke hubbard wrote:
> >> On 13 Jan 2003 at 23:45, Andrew Savory wrote:
> >> I'm always up for lunch :)
> >> Name the place and I'll meet you there.
> >
> > I'll be there as well. Andrew, is David going to be with you? Jeremy and
> > other london people: want to join us for the first even cocoon
> > lunch-time gettogether (hey, we are breaking new grounds.... ) ah, no,
> > wait, Andrew, David and I met for lunch with Diana in Boston ... but
> > that wasn't an official gettogether...
> >
> > oh, well...
> Are we talking about 1pm Wednesday? If so - I'll be in, where?

Ok, looks like list lag got us ... I checked at 12:45 but hadn't got any
replies. So, let's try this: how about meeting up next Tuesday (21st) or
Wednesday (22nd), for either lunch or evening meal? (Send your preference
to the list, and we'll see if we can get consensus).

How about a meeting the following month down in Brighton?


Andrew.

-- 
Andrew Savory                                Email: andrew@luminas.co.uk
Managing Director                              Tel:  +44 (0)870 741 6658
Luminas Internet Applications                  Fax:  +44 (0)700 598 1135
This is not an official statement or order.    Web:    www.luminas.co.uk



Re: Back in London

Posted by Mark Leicester <ma...@metering.co.nz>.
On 14/01/2003 15:11, "Stefano Mazzocchi" <st...@apache.org> wrote:

> luke hubbard wrote:
>> On 13 Jan 2003 at 23:45, Andrew Savory wrote:
>> I'm always up for lunch :)
>> Name the place and I'll meet you there.
> 
> I'll be there as well. Andrew, is David going to be with you? Jeremy and
> other london people: want to join us for the first even cocoon
> lunch-time gettogether (hey, we are breaking new grounds.... ) ah, no,
> wait, Andrew, David and I met for lunch with Diana in Boston ... but
> that wasn't an official gettogether...
> 
> oh, well...
Are we talking about 1pm Wednesday? If so - I'll be in, where?
Mark.




Re: Back in London

Posted by Jeremy Quinn <je...@media.demon.co.uk>.
On Tuesday, Jan 14, 2003, at 15:11 Europe/London, Stefano Mazzocchi 
wrote:

> luke hubbard wrote:
>> On 13 Jan 2003 at 23:45, Andrew Savory wrote:
>>> On Mon, 13 Jan 2003, luke hubbard wrote:
>>>
>>>
>>>> On 12 Jan 2003 at 14:29, Stefano Mazzocchi wrote:
>>>>
>>>>> Anybody up for a little hackathon one of these nights?
>>>>
>>>> Sure am, would love to meet up with fellow cocooners in london.
>>>> Any ideas as to where and when?
>>>
>>> Not a hackathon, but I'm in London tomorrow on business ... anyone 
>>> want to
>>> meet up for lunch? 1pm or thereabouts, in the Waterloo area?
>>>
>>  I'm always up for lunch :) Name the place and I'll meet you there.
>
> I'll be there as well. Andrew, is David going to be with you? Jeremy 
> and other london people: want to join us for the first even cocoon 
> lunch-time gettogether (hey, we are breaking new grounds.... ) ah, no, 
> wait, Andrew, David and I met for lunch with Diana in Boston ... but 
> that wasn't an official gettogether...
>

what *day* are you all talking about ? ;)

regards Jeremy ( Quinn )




Re: Back in London

Posted by Stefano Mazzocchi <st...@apache.org>.
luke hubbard wrote:
> On 13 Jan 2003 at 23:45, Andrew Savory wrote:
> 
> 
>>On Mon, 13 Jan 2003, luke hubbard wrote:
>>
>>
>>>On 12 Jan 2003 at 14:29, Stefano Mazzocchi wrote:
>>>
>>>>Anybody up for a little hackathon one of these nights?
>>>
>>>Sure am, would love to meet up with fellow cocooners in london.
>>>Any ideas as to where and when?
>>
>>Not a hackathon, but I'm in London tomorrow on business ... anyone want to
>>meet up for lunch? 1pm or thereabouts, in the Waterloo area?
>>
> 
>  
> I'm always up for lunch :) 
> Name the place and I'll meet you there.

I'll be there as well. Andrew, is David going to be with you? Jeremy and 
other london people: want to join us for the first ever cocoon 
lunch-time get-together (hey, we are breaking new ground.... ) ah, no, 
wait, Andrew, David and I met for lunch with Diana in Boston ... but 
that wasn't an official get-together...

oh, well...

-- 
Stefano Mazzocchi                               <st...@apache.org>
--------------------------------------------------------------------





Re: Bayes expiry not working?

Posted by Kai Schaetzl <ma...@conactive.com>.
Nels Lindquist wrote on Thu, 15 Apr 2004 13:23:04 -0600:

> There are some bizarre entries as well; there were about 10 records 
> with atimes of 0, and four entries with atimes in the future 
> (September and October 2004).

See my other posting: those are the entries you have to remove. Also correct the 
newest atime in the magic to the current time (the magic is the first ten lines 
with the "summary" values). Then raise your bayes_expiry_max_db_size to 500,000; 
sa-learn will then use an expire goal of 500,000 x 0.75 = 375,000, which would 
roughly expire 25,000 tokens (400,000 - 375,000). If you leave it at the default 
it will eat more than half of your db. And now comes the trick: you have to 
provide reasonable values for the last two values in the magic, because then 
sa-learn will use the "guesstimate" expire instead of the "estimation pass" 
(which currently fails and may still fail after the correction).

From the docs:
- if an expire has been done before, guesstimate the new atime delta based on 
the old atime delta (new_atime_delta = old_atime_delta * old_reduction_count / 
goal).

So, estimate a reasonable value for the new_atime_delta that might 
correspond to the desired reduction count, and set old_reduction_count = the 
desired new reduction count (25,000). If that db is 90 days old you could try a 
new_atime_delta of 80 days (convert to seconds yourself please :-), so everything 
older than 80 days is expired. Now you can calculate the old_atime_delta and 
correct both old_atime_delta and old_reduction_count in the dump. You don't 
strictly need to calculate; you could estimate if you take the explanation in 
the docs into account, but it's easier to just calculate.
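The arithmetic above can be sketched in a few lines of code. This is a rough
illustration only: the figures (500,000 max size, 400,000 current tokens, an
80-day delta) are example values from this thread, and the formula is the
"guesstimate" one quoted from the docs, not SpamAssassin's actual source.

```java
// Sketch of the expiry arithmetic discussed above (assumed example values).
public class BayesExpiryEstimate {

    // sa-learn's expire goal: 75% of bayes_expiry_max_db_size.
    static long expireGoal(long maxDbSize) {
        return (long) (maxDbSize * 0.75);
    }

    // "Guesstimate" formula from the docs:
    // new_atime_delta = old_atime_delta * old_reduction_count / goal
    static long newAtimeDelta(long oldAtimeDelta, long oldReductionCount, long goal) {
        return oldAtimeDelta * oldReductionCount / goal;
    }

    public static void main(String[] args) {
        long goal = expireGoal(500_000L);      // 375,000
        long toExpire = 400_000L - goal;       // roughly 25,000 tokens
        long eightyDays = 80L * 24 * 60 * 60;  // 80 days, in seconds
        System.out.println(goal + " " + toExpire + " "
                + newAtimeDelta(eightyDays, toExpire, goal));
    }
}
```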

Hope I got it right so far; it's already some days ago now that I did that.

> > There are also methods for correcting the atime and magic token values and 
> > recreating a db from the corrected dump....
> 
> So should I look at doing this?
>

If you really want to try I can send you a perl script from Michael Parker 
which will recreate a Bayes DB from the corrected dump.

Kai

-- 

Kai Schätzl, Berlin, Germany
Get your web at Conactive Internet Services: http://www.conactive.com
IE-Center: http://ie5.de & http://msie.winware.org




RE: SOAP/Tomcat

Posted by Rob McGrath <rm...@riteaid.com>.
No forgiveness needed. 1) you didn't underestimate my familiarity as this is
almost impossible; 2) this is exactly the type of help I needed.

So thank you. Let me double-check my setup... I appreciate your time and
quick response.

Hope I can return the favor sometime.

-----Original Message-----
From: Paul J. Caritj [mailto:pcaritj@riovia.net]
Sent: Monday, June 02, 2003 3:18 PM
To: soap-user@ws.apache.org
Subject: Re: SOAP/Tomcat

Sounds to me like you need to have the SOAP classes in the Tomcat
classpath. Forgive me if I underestimate your familiarity with Tomcat,
but Tomcat uses its own classpath for the processing of imports (not the
globally defined CLASSPATH environment variable). For classes that are
used only in this application, you would put support classes in

/tomcat_root/webapps/APPLICATIONNAME/WEB-INF/classes and JAR files would
be put in /tomcat_root/webapps/APPLICATIONNAME/WEB-INF/lib

If you want to share these amongst multiple apps, they would go in
/tomcat_root/shared/classes (or /lib).

When you run it from the command line, the global classpath is used. I
am assuming that your SOAP, etc support classes are in this classpath.
Ergo, application works from the command line but not Tomcat.

Hope this helps.
-Paul Caritj

On Mon, 2003-06-02 at 14:52, Rob McGrath wrote:
> OK. This is my first time using this mail list. Forgive me if I fall short
> of the norm on appropriate info and/or standards... I'm glad I've found it
> though. :D
>
> I work for a major corporation and have been tasked with integrating a Web
> Reporting Server with our in house security.
>
> Problem is, the generation of the 3rd party software I am integrating has
> functionality we want but only in its Java "version." We are not a Java
> shop and as of 2 months ago I had never seen Java code and didn't know
what
> a .class file was.
>
> I have since learned :D this stuff and written some simple but functional
> code. Here's what it has to do.
>
> As a user makes a request at the web server.. there is a authenticate.jsp
> page that does the out of the box security. It parses cookies and
> authenticates the user's cookie info against internal security
information.
>
> I have to take that and instead go against our in-house DB2 tables and
check
> for a valid session id. This is created when the user first goes through
our
> Portal login page which is all .Net (web, infrastructure).
>
> There is a .net webservice that returns a userid if a valid and active
> session id and environment variable are passed to it.
>
> So, I wrote a .class file using soap from apache to call this web service
(I
> learned along the way that it needed rpc enabled on the .net side in order
> to handle the call - that was fun).
>
> Now, I have a class file that works. I pass it 2 parms it give me back
what
> I want. I have altered the .jsp page to parse out the cookie I need and
pass
> the info I need.
>
> This works. I can see the output on the web page (cause I write it there
> showing the parms). From a command line, I can execute the .class file and
> get back the answer I need from the VB.Net webservice.
>
> I CAN'T GET THIS TO WORK TOGETHER INSIDE THE JSP.
>
> Forever, I have been getting an error
>
> javax.servlet.ServletException
>
> java.lang.NoClassDefFoundError at
>
> (line in my code < inside my class file) that I know is the first
execution
> of an object from soap.jar... it is the SMR object. That fails, however, I
> am sure that all subsequent reference would fail...
>
> But it compiles... and is not blowing up on the imports of the packages?
> In addition, I can call this from a command line and it works.
>
> It appears only to be a runtime failure and only from the JSP.
>
> This leads me to believe among other things... that Tomcat must have its
own
> runtime classpath that is separate from mine when I'm signed in to the
> server... that's another thing worth mentioning... I'm developing this on
> the server. I'm signed in as Administrator and the .Net web service is on
a
> physically different server. So, although this is a web server, the SOAP
> I've written is really a SOAP-Client.
>
> I've changed the JSP to write out
>
> System.getProperty( "java.class.path")
>
> And it only writes out tools.jar and bootstrap.jar
>
> Even though I've added soap.jar to both the Admin-User classpath as well
as
> the system classpath environment variables.
>
>
> I'll stop here because I feel I may have given too much useless info and
not
> enough relevant info.
>
> Any help would be SO greatly appreciated.
>
> I'd be happy to clear up anything I've said too. (Obviously) :D
>
> Thanks.
> Rob
>
>
> <html>
> <font face="Verdana" size=1><b> Disclaimer:</b> This e-mail and any
attachments thereto, is intended only for use by the addressee(s)named
herein
> and may contain legally privileged and/or confidential information. If you
are not the intended recipient of this e-mail,
> you are hereby notified any dissemination, distribution or copying of this
email, and any attachments thereto, is strictly
> prohibited.  If you receive this email in error, please immediately notify
us by replying to this message. You must
> permanently delete the original e-mail and any copies and printouts made
thereof. Delivery of this e-mail and any
> attachments to any person other than the intended recipient(s)is not
intended in any way to waive confidentiality
> or a privilege. All personal messages express views only of the sender,
which are not to be attributed to Rite Aid
> Corporation and may not be copied or distributed without this
statement.</font>
> </html>
>



Re: SOAP/Tomcat

Posted by "Paul J. Caritj" <pc...@riovia.net>.
Sounds to me like you need to have the SOAP classes in the Tomcat
classpath. Forgive me if I underestimate your familiarity with Tomcat,
but Tomcat uses its own classpath for the processing of imports (not the
globally defined CLASSPATH environment variable). For classes that are
used only in this application, you would put support classes in

/tomcat_root/webapps/APPLICATIONNAME/WEB-INF/classes and JAR files would
be put in /tomcat_root/webapps/APPLICATIONNAME/WEB-INF/lib

If you want to share these amongst multiple apps, they would go in
/tomcat_root/shared/classes (or /lib).

When you run it from the command line, the global classpath is used. I
am assuming that your SOAP, etc support classes are in this classpath.
Ergo, application works from the command line but not Tomcat.
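One way to confirm this from inside the webapp is a tiny probe like the
following (a hypothetical helper, not part of Tomcat or Apache SOAP;
`org.apache.soap.rpc.Call` is the Apache SOAP client class you would test for):

```java
// Hypothetical probe: run inside a JSP/servlet to see what the container's
// classpath actually contains, as opposed to the shell's CLASSPATH.
public class ClasspathProbe {

    // Returns true if the named class is loadable from the current classpath.
    static boolean canLoad(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Inside Tomcat this reflects the container's view, not the shell's
        // CLASSPATH environment variable.
        System.out.println(System.getProperty("java.class.path"));
        // false here would mean soap.jar is missing from this classpath.
        System.out.println(canLoad("org.apache.soap.rpc.Call"));
    }
}
```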

Hope this helps.
-Paul Caritj

On Mon, 2003-06-02 at 14:52, Rob McGrath wrote:
> OK. This is my first time using this mail list. Forgive me if I fall short
> of the norm on appropriate info and/or standards... I'm glad I've found it
> though. :D
> 
> I work for a major corporation and have been tasked with integrating a Web
> Reporting Server with our in house security.
> 
> Problem is, the generation of the 3rd party software I am integrating has
> functionality we want but only in its Java "version." We are not a Java
> shop and as of 2 months ago I had never seen Java code and didn't know what
> a .class file was.
> 
> I have since learned :D this stuff and written some simple but functional
> code. Here's what it has to do.
> 
> As a user makes a request at the web server.. there is a authenticate.jsp
> page that does the out of the box security. It parses cookies and
> authenticates the user's cookie info against internal security information.
> 
> I have to take that and instead go against our in-house DB2 tables and check
> for a valid session id. This is created when the user first goes through our
> Portal login page which is all .Net (web, infrastructure).
> 
> There is a .net webservice that returns a userid if a valid and active
> session id and environment variable are passed to it.
> 
> So, I wrote a .class file using soap from apache to call this web service (I
> learned along the way that it needed rpc enabled on the .net side in order
> to handle the call - that was fun).
> 
> Now, I have a class file that works. I pass it 2 parms it give me back what
> I want. I have altered the .jsp page to parse out the cookie I need and pass
> the info I need.
> 
> This works. I can see the output on the web page (cause I write it there
> showing the parms). From a command line, I can execute the .class file and
> get back the answer I need from the VB.Net webservice.
> 
> I CAN'T GET THIS TO WORK TOGETHER INSIDE THE JSP.
> 
> Forever, I have been getting an error
> 
> javax.servlet.ServletException
> 
> java.lang.NoClassDefFoundError at
> 
> (line in my code < inside my class file) that I know is the first execution
> of an object from soap.jar... it is the SMR object. That fails, however, I
> am sure that all subsequent reference would fail...
> 
> But it compiles... and is not blowing up on the imports of the packages?
> In addition, I can call this from a command line and it works.
> 
> It appears only to be a runtime failure and only from the JSP.
> 
> This leads me to believe among other things... that Tomcat must have its own
> runtime classpath that is separate from mine when I'm signed in to the
> server... that's another thing worth mentioning... I'm developing this on
> the server. I'm signed in as Administrator and the .Net web service is on a
> physically different server. So, although this is a web server, the SOAP
> I've written is really a SOAP-Client.
> 
> I've changed the JSP to write out
> 
> System.getProperty( "java.class.path")
> 
> And it only writes out tools.jar and bootstrap.jar
> 
> Even though I've added soap.jar to both the Admin-User classpath as well as
> the system classpath environment variables.
> 
> 
> I'll stop here because I feel I may have given too much useless info and not
> enough relevant info.
> 
> Any help would be SO greatly appreciated.
> 
> I'd be happy to clear up anything I've said too. (Obviously) :D
> 
> Thanks.
> Rob
> 
> 


RE: SOAP/Tomcat

Posted by Rob McGrath <rm...@riteaid.com>.
Gotcha. You are right on all assumptions. Thought something like this would
be the answer; wanted to hear from an expert. Thank you sir!

-----Original Message-----
From: Scott Nichol [mailto:snicholnews@scottnichol.com]
Sent: Monday, June 23, 2003 1:49 PM
To: soap-user@ws.apache.org
Subject: RE: SOAP/Tomcat


This is an error that would come during compilation, so I presume it
is coming from a JSP you have written or modified, right?  What you
need to do is specify the fully qualified class name, including its
package, when declaring and instantiating variables.  For example, instead of

Vector params = new Vector();
params.addElement(new Parameter(...));

you would do

Vector params = new Vector();
params.addElement(new org.apache.soap.rpc.Parameter(...));

That way, the compiler knows which of the two Parameter classes you
want to instantiate.
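The same collision can be reproduced with two JDK classes that share a simple
name, java.util.List and java.awt.List. A minimal sketch ("sessionId" and
"environment" are made-up parameter names for illustration):

```java
import java.util.*;
import java.awt.*;  // java.awt also defines a class named List

public class FullyQualifiedDemo {

    // With both wildcard imports above, the bare name "List" is ambiguous
    // and will not compile; fully qualifying the name tells the compiler
    // which one we mean, exactly as with the two Parameter classes.
    static java.util.List<String> buildParams() {
        java.util.List<String> params = new java.util.ArrayList<>();
        params.add("sessionId");    // hypothetical parameter names
        params.add("environment");
        return params;
    }

    public static void main(String[] args) {
        System.out.println(buildParams().size());
    }
}
```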

On 23 Jun 2003 at 11:51, Rob McGrath wrote:

> I have a new problem related to this implementation (see previous
> email/solution for catch-up, but don't think it's needed). I've got the
> server up and the application and server infrastructure up in development
> (where our developers are testing it, and playing around w/ new
> functionality).
>
> quick overview
> The server machine is on Win2K advanced server and is running
> tomcat,apache-soap, and 3rd-party reporting software. i needed to write a
> security app to call a .net web service to authenticate users on each
> request.
>
> Ambiguous class: org.apache.soap.rpc.Parameter and
> com.actuate.reportcast.dstruct.Parameter
>
> this is my error.
>
> when a user submits a report generation request this is the response they
> get. i have tried explicitly importing the classes so as to avoid naming
> collision, but that didn't seem to work. it obviously has to do w/ the
> classes being named the same. maybe i need to find a jar for the 3rd party
> (obviously actuate now :D) software and put it in the right folder?
>
> don't know if anyone has worked w/ this software before, or run into
> contention w/ this type of class? (soap-related)?
>
> any help/advice would be great. a final note: don't assume i know
> anything... i've been teaching myself, java,jsp,tomcat,apache-soap on the
> fly for this software implementation. $ is tight and can't get the
training!
> :(
>
> anyhow, thanks!
>
> rob
>
>
>
>
> -----Original Message-----
> From: Paul J. Caritj [mailto:pcaritj@riovia.net]
> Sent: Monday, June 02, 2003 3:26 PM
> To: soap-user@ws.apache.org
> Subject: Re: SOAP/Tomcat
>
>
> Upon reading all of your email, I see you had deduced this fact. Still,
> my email should be of some help.
>
> Sorry,
> Paul
>
> On Mon, 2003-06-02 at 14:52, Rob McGrath wrote:
> > OK. This is my first time using this mail list. Forgive me if I fall
short
> > of the norm on appropriate info and/or standards... I'm glad I've found
it
> > though. :D
> >
> > I work for a major corporation and have been tasked with integrating a
Web
> > Reporting Server with our in house security.
> >
> > [original message and disclaimer snipped; the message is posted in full later in this thread]


Scott Nichol

Do not reply directly to this e-mail address,
as it is filtered to only receive e-mail from
specific mailing lists.




__________________________________________________________________________
Disclaimer: This e-mail message is intended only for the personal use of 
the recipient(s) named above.  If you are not an intended recipient, you 
may not review, copy or distribute this message. If you have received this
communication in error, please notify us immediately by e-mail and delete 
the original message.
This e-mail expresses views only of the sender, which are not to be 
attributed to Rite Aid Corporation and may not be copied or distributed 
without this statement.

RE: SOAP/Tomcat

Posted by Scott Nichol <sn...@scottnichol.com>.
This is an error that would come during compilation, so I presume it 
is coming from a JSP you have written or modified, right?  What you 
need to do is specify the fully qualified class name, package and all, 
when declaring and instantiating variables.  For example, instead of

Vector params = new Vector();
params.addElement(new Parameter(...));

you would do

Vector params = new Vector();
params.addElement(new org.apache.soap.rpc.Parameter(...));

That way, the compiler knows which of the two Parameter classes you 
want to instantiate.
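[Editor's note: the same collision can be reproduced with two JDK classes that share a simple name. This sketch uses java.util.Date and java.sql.Date as stand-ins for the two Parameter classes, since the Actuate and Apache SOAP jars are not at hand:]

```java
// Both packages are imported with wildcards, so the bare name "Date" is
// ambiguous -- exactly like org.apache.soap.rpc.Parameter vs.
// com.actuate.reportcast.dstruct.Parameter in this thread.
import java.util.*;
import java.sql.*;

public class AmbiguousDemo {
    public static void main(String[] args) {
        // Date d = new Date(0L);   // would NOT compile: "reference to Date is ambiguous"
        java.util.Date utilDate = new java.util.Date(0L); // fully qualified: compiles fine
        java.sql.Date sqlDate = new java.sql.Date(0L);
        System.out.println(utilDate.getTime() + " " + sqlDate.getTime()); // prints "0 0"
    }
}
```

A single-type import (e.g. `import org.apache.soap.rpc.Parameter;`) also resolves the ambiguity, but only when the competing class is not imported the same way; fully qualifying at the point of use is the safe option.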

On 23 Jun 2003 at 11:51, Rob McGrath wrote:

> I have a new problem related to this implementation (see the previous
> email/solution for catch-up, though I don't think it's needed). I've got the
> server up, and the application and server infrastructure are up in development
> (where our developers are testing it and playing around w/ new
> functionality).
> 
> quick overview
> The server machine is on Win2K advanced server and is running
> tomcat,apache-soap, and 3rd-party reporting software. i needed to write a
> security app to call a .net web service to authenticate users on each
> request. 
> 
> Ambiguous class: org.apache.soap.rpc.Parameter and
> com.actuate.reportcast.dstruct.Parameter
> 
> this is my error.
> 
> when a user submits a report generation request this is the response they
> get. i have tried explicitly importing the classes so as to avoid naming
> collision, but that didn't seem to work. it obviously has to do w/ the
> classes being named the same. maybe i need to find a jar for the 3rd party
> (obviously actuate now :D) software and put it in the right folder?
> 
> don't know if anyone has worked w/ this software before, or run into
> contention w/ this type of class? (soap-related)? 
> 
> any help/advice would be great. a final note: don't assume i know
> anything... i've been teaching myself, java,jsp,tomcat,apache-soap on the
> fly for this software implementation. $ is tight and can't get the training!
> :(
> 
> anyhow, thanks!
> 
> rob
> 
> 
> 
> 
> [earlier quoted messages and disclaimer snipped]


Scott Nichol

Do not reply directly to this e-mail address,
as it is filtered to only receive e-mail from
specific mailing lists.



RE: SOAP/Tomcat

Posted by Rob McGrath <rm...@riteaid.com>.
I have a new problem related to this implementation (see the previous
email/solution for catch-up, though I don't think it's needed). I've got the
server up, and the application and server infrastructure are up in development
(where our developers are testing it and playing around w/ new
functionality).

quick overview
The server machine is on Win2K Advanced Server and is running
tomcat, apache-soap, and 3rd-party reporting software. i needed to write a
security app to call a .net web service to authenticate users on each
request. 

Ambiguous class: org.apache.soap.rpc.Parameter and
com.actuate.reportcast.dstruct.Parameter

this is my error.

when a user submits a report generation request this is the response they
get. i have tried explicitly importing the classes so as to avoid naming
collision, but that didn't seem to work. it obviously has to do w/ the
classes being named the same. maybe i need to find a jar for the 3rd party
(obviously actuate now :D) software and put it in the right folder?

don't know if anyone has worked w/ this software before, or run into
contention w/ this type of class? (soap-related)? 

any help/advice would be great. a final note: don't assume i know
anything... i've been teaching myself, java,jsp,tomcat,apache-soap on the
fly for this software implementation. $ is tight and can't get the training!
:(

anyhow, thanks!

rob




[earlier quoted messages and disclaimer snipped]


RE: SOAP/Tomcat

Posted by Rob McGrath <rm...@riteaid.com>.
Gave me the clue I needed. It's working! Thanks Paul.
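[Editor's note: the fix itself never appears in the archived thread, so here is a hedged sketch of the usual remedy for this kind of NoClassDefFoundError: Tomcat webapps ignore the system CLASSPATH, so soap.jar must sit where Tomcat's own class loaders look. The directory names below assume a Tomcat 4.x layout and a webapp named "myapp" — both are placeholders; the program just builds a scratch copy of that layout to show the two candidate locations:]

```java
// Demonstrates WHERE a jar must live for a Tomcat webapp to see it.
// In a real install, tomcatHome is the Tomcat directory, not a temp dir.
import java.nio.file.*;

public class TomcatLibDemo {
    public static void main(String[] args) throws Exception {
        Path tomcatHome = Files.createTempDirectory("tomcat");
        // Option 1: visible to a single webapp
        Path webappLib = tomcatHome.resolve("webapps/myapp/WEB-INF/lib");
        // Option 2: shared by every webapp (Tomcat 4.x "common/lib")
        Path commonLib = tomcatHome.resolve("common/lib");
        Files.createDirectories(webappLib);
        Files.createDirectories(commonLib);

        Path soapJar = Files.createTempFile("soap", ".jar"); // stand-in for the real jar
        Files.copy(soapJar, webappLib.resolve("soap.jar"));
        Files.copy(soapJar, commonLib.resolve("soap.jar"));

        System.out.println(Files.exists(webappLib.resolve("soap.jar"))); // prints "true"
        System.out.println(Files.exists(commonLib.resolve("soap.jar"))); // prints "true"
    }
}
```

After copying the jar, Tomcat must be restarted. Note that `System.getProperty("java.class.path")` inside a JSP will still show only bootstrap.jar and tools.jar, because webapp classes are loaded by Tomcat's own class loaders, not from that property.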

[earlier quoted messages and disclaimer snipped]


<html>
<font face="Verdana" size=1><b> Disclaimer:</b> This e-mail and any attachments thereto, is intended only for use by the addressee(s)named herein
and may contain legally privileged and/or confidential information. If you are not the intended recipient of this e-mail,
you are hereby notified any dissemination, distribution or copying of this email, and any attachments thereto, is strictly
prohibited.  If you receive this email in error, please immediately notify us by replying to this message. You must 
permanently delete the original e-mail and any copies and printouts made thereof. Delivery of this e-mail and any 
attachments to any person other than the intended recipient(s)is not intended in any way to waive confidentiality 
or a privilege. All personal messages express views only of the sender, which are not to be attributed to Rite Aid
Corporation and may not be copied or distributed without this statement.</font>
</html>

RE: SOAP/Tomcat

Posted by Rob McGrath <rm...@riteaid.com>.
That's not completely true. Had guessed at its existence... but, couldn't be
sure that I wasn't embarking down a fruitless path. In addition, you
provided the specific paths. Thanks - very much!

[earlier quoted messages and disclaimer snipped]



Re: SOAP/Tomcat

Posted by "Paul J. Caritj" <pc...@riovia.net>.
Upon reading all of your email, I see you had deduced this fact. Still,
my email should be of some help.

Sorry,
Paul

On Mon, 2003-06-02 at 14:52, Rob McGrath wrote:
> [original message snipped; posted in full below]


SOAP/Tomcat

Posted by Rob McGrath <rm...@riteaid.com>.
OK. This is my first time using this mail list. Forgive me if I fall short
of the norm on appropriate info and/or standards... I'm glad I've found it
though. :D

I work for a major corporation and have been tasked with integrating a Web
Reporting Server with our in house security.

Problem is, the generation of the 3rd party software I am integrating has
functionality we want but only in its Java "version." We are not a Java
shop and as of 2 months ago I had never seen Java code and didn't know what
a .class file was.

I have since learned :D this stuff and written some simple but functional
code. Here's what it has to do.

When a user makes a request at the web server, there is an authenticate.jsp
page that does the out-of-the-box security. It parses cookies and
authenticates the user's cookie info against internal security information.

I have to take that and instead go against our in-house DB2 tables and check
for a valid session id. This is created when the user first goes through our
Portal login page which is all .Net (web, infrastructure).

There is a .net webservice that returns a userid if a valid and active
session id and environment variable are passed to it.

So, I wrote a .class file using soap from apache to call this web service (I
learned along the way that it needed rpc enabled on the .net side in order
to handle the call - that was fun).

Now, I have a class file that works. I pass it 2 parms and it gives me back what
I want. I have altered the .jsp page to parse out the cookie I need and pass
the info I need.

This works. I can see the output on the web page (because I write it there
showing the parms). From a command line, I can execute the .class file and
get back the answer I need from the VB.Net webservice.

I CAN'T GET THIS TO WORK TOGETHER INSIDE THE JSP.

Forever, I have been getting an error

javax.servlet.ServletException

java.lang.NoClassDefFoundError at

(a line in my code, inside my class file) that I know is the first use
of an object from soap.jar... it is the SMR object. That fails first; however, I
am sure that all subsequent references would fail too...

But it compiles... and is not blowing up on the imports of the packages?
In addition, I can call this from a command line and it works.

It appears only to be a runtime failure and only from the JSP.

This leads me to believe among other things... that Tomcat must have its own
runtime classpath that is separate from mine when I'm signed in to the
server... that's another thing worth mentioning... I'm developing this on
the server. I'm signed in as Administrator and the .Net web service is on a
physically different server. So, although this is a web server, the SOAP
I've written is really a SOAP-Client.

I've changed the JSP to write out

System.getProperty( "java.class.path")

And it only writes out tools.jar and bootstrap.jar

Even though I've added soap.jar to both the Admin-User classpath and the
system classpath environment variables.
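
A small standalone probe (a sketch, not Tomcat-specific code) illustrates why this happens: the java.class.path property only reflects the jars Tomcat itself was launched with (bootstrap.jar, tools.jar), while webapp code is resolved by a separate per-webapp classloader. Assuming a standard Tomcat layout, the usual fix is to put soap.jar in the webapp's WEB-INF/lib directory rather than in the user or system CLASSPATH environment variables.

```java
// Sketch: show the two things a JSP would see.  Run standalone this
// reports the command-line classpath; inside Tomcat the same code would
// report only the launcher jars plus Tomcat's own webapp classloader.
public class ClasspathProbe {
    public static String systemClasspath() {
        // Inside Tomcat this is just bootstrap.jar/tools.jar, which is
        // exactly what the original poster observed from the JSP.
        return System.getProperty("java.class.path");
    }

    public static String contextLoader() {
        // Tomcat gives each webapp its own loader, distinct from the
        // system classloader; jars in WEB-INF/lib are visible to it.
        return String.valueOf(Thread.currentThread().getContextClassLoader());
    }

    public static void main(String[] args) {
        System.out.println("java.class.path = " + systemClasspath());
        System.out.println("context loader  = " + contextLoader());
    }
}
```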


I'll stop here because I feel I may have given too much useless info and not
enough relevant info.

Any help would be SO greatly appreciated.

I'd be happy to clear up anything I've said too. (Obviously) :D

Thanks.
Rob



RE: Is this a bug for post version soap 2.3.1?

Posted by Scott Nichol <sn...@scottnichol.com>.
Try soap-user-unsubscribe@ws.apache.org.  While postings seem to make 
it to the right place using xml.apache.org, commands (such as 
unsubscribe) do not.

On 2 Jun 2003 at 11:57, Manish Sangani wrote:

> how to unsubscribe from soap-user list. I am trying this email address :
> soap-user-unsubscribe@xml.apache.org which seeems to be not working.
> 
> let me know
> 
> thanks
> 
> 
> 
> 
> 


Scott Nichol

Do not reply directly to this e-mail address,
as it is filtered to only receive e-mail from
specific mailing lists.



Re: Shameless plug

Posted by Anne Thomas Manes <an...@manes.net>.
I have sections on choreography/orchestration, reliable messaging, and interoperability issues (not testing, though). It doesn't go into a great deal of detail on asynchrony or process flow.

Anne
  ----- Original Message ----- 
  From: Vishal Shah 
  To: soap-user@ws.apache.org 
  Sent: Wednesday, June 04, 2003 12:36 PM
  Subject: Re: Shameless plug


  Hi,

  Does your book address issues such as
  Asynchrony 
  Process Flow
  Interoperability testing

  Regards,
  VS


------------------------------------------------------------------------------
  Do you Yahoo!?
  Free online calendar with sync to Outlook(TM).

Re: Shameless plug

Posted by Vishal Shah <sh...@yahoo.com>.
Hi,
 
Does your book address issues such as
Asynchrony 
Process Flow
Interoperability testing
 
Regards,
VS



Re: Is this a bug for post version soap 2.3.1?

Posted by Anne Thomas Manes <an...@manes.net>.
The address has changed. It is now:

soap-user-unsubscribe@ws.apache.org


----- Original Message ----- 
From: "Manish Sangani" <ms...@viecorefsd.com>
To: <so...@ws.apache.org>
Sent: Monday, June 02, 2003 11:57 AM
Subject: RE: Is this a bug for post version soap 2.3.1?


> how to unsubscribe from soap-user list. I am trying this email address :
> soap-user-unsubscribe@xml.apache.org which seeems to be not working.
> 
> let me know
> 
> thanks
> 
> 
> 
> 


RE: Is this a bug for post version soap 2.3.1?

Posted by Manish Sangani <ms...@viecorefsd.com>.
How do I unsubscribe from the soap-user list? I am trying this email address:
soap-user-unsubscribe@xml.apache.org, which seems not to be working.

let me know

thanks





Re: Is this a bug for post version soap 2.3.1?

Posted by Scott Nichol <sn...@scottnichol.com>.
I highly recommend no one put soap.jar (or activation.jar or 
xerces.jar, etc.) in lib/ext.  That directory is for Sun's 
extensions, not jars we are too lazy to add to our classpath!
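
One way to confirm whether a stale jar in lib/ext is shadowing the copy on your classpath is to ask the JVM where it actually loaded a class from. This is a hedged sketch; for the real problem you would pass org.apache.soap.rpc.SOAPContext instead of the demo classes used here.

```java
// Sketch: print the jar or directory a class was loaded from, to detect
// an old copy in jre/lib/ext shadowing the one on the classpath.
import java.security.CodeSource;

public class ClassOrigin {
    public static String originOf(Class<?> cls) {
        CodeSource src = cls.getProtectionDomain().getCodeSource();
        // Bootstrap classes (e.g. java.lang.String) report no code source.
        return src == null ? "(bootstrap)" : src.getLocation().toString();
    }

    public static void main(String[] args) {
        // For the real case, substitute org.apache.soap.rpc.SOAPContext.class.
        System.out.println(originOf(ClassOrigin.class));
        System.out.println(originOf(String.class));
    }
}
```

If the printed location is jre/lib/ext rather than the soap.jar you built against, the extension directory copy wins the lookup.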

On 2 Jun 2003 at 11:33, Scott Nichol wrote:

> Check for an old soap.jar in %JAVA_HOME%/jre/lib/ext or 
> %JAVA_HOME%/lib/ext.  Classes there will get picked up before those 
> in your classpath.
> 
> On 2 Jun 2003 at 11:07, Daniel Zhang wrote:
> 
> > Scott -
> > 
> > Are you sure that you download soap-bin-2.3.1.zip from the latest 
> > nightly directory? I follow your way exactly and found no
> > getEnvelope method in the output, the following is what I did, output is 
> > in attachment.
> > 
> > (1) Download soap-bin-2.3.1.zip from 
> > http://cvs.apache.org/dist/soap/nightly/2003-06-02/
> > (2) Use WINZIP to unzip to D: drive
> > (3) Ran
> > 
> > D:\soap-2_3_1\lib>%JAVA_HOME%/bin/javap -classpath 
> > d:\soap-2_3_1\lib\soap.jar;%CLASSPATH% org.apache.soap.rpc.SOAPContext > 
> > output.txt
> > 
> > I am using java version "1.4.1_01".Then I checked output and found NO 
> > getEnvelope method there. See attachment. Any ideas?
> > 
> > -Daniel
> > 
> > Scott Nichol wrote:
> > 
> > >The method is there.  What I did to confirm this is
> > >
> > >1. Download soap-bin-2.3.1.zip from the latest nightly directory.
> > >2. Unzipped to I:
> > >3. Ran
> > >
> > >I:\soap-2_3_1\lib>javap -classpath 
> > >i:\soap2_3_1\lib\soap.jar;%CLASSPATH% org.apache.soap.rpc.SOAPContext
> > >
> > >The output is attached.
> > >
> > >On 2 Jun 2003 at 9:32, Daniel Zhang wrote:
> > >
> > >  
> > >
> > >>Hi, Scott Nichol -
> > >>
> > >>I download soap nightly build from 
> > >>http://cvs.apache.org/dist/soap/nightly/ and try to use a method
> > >>getEnvelope() in class org.apache.soap.rpc.SOAPContext in my soap 
> > >>program. I found its JavaDoc
> > >>lists this method but my compiler (NetBean) complained it can not find 
> > >>this method from soap.jar I
> > >>got from the same build.
> > >>
> > >>Is this a bug? Please tell me how to fix it. Thanks a lot!
> > >>
> > >>-Daniel
> > >>
> > >>
> > >>
> > >>
> > >>
> > >>    
> > >>
> > >
> > >
> > >Scott Nichol
> > >
> > >Do not reply directly to this e-mail address,
> > >as it is filtered to only receive e-mail from
> > >specific mailing lists.
> > >
> > >
> > >  
> > >
> > 
> > 
> 
> 
> Scott Nichol
> 
> Do not reply directly to this e-mail address,
> as it is filtered to only receive e-mail from
> specific mailing lists.
> 
> 
> 


Scott Nichol

Do not reply directly to this e-mail address,
as it is filtered to only receive e-mail from
specific mailing lists.



Re: Is this a bug for post version soap 2.3.1?

Posted by Scott Nichol <sn...@scottnichol.com>.
Check for an old soap.jar in %JAVA_HOME%/jre/lib/ext or 
%JAVA_HOME%/lib/ext.  Classes there will get picked up before those 
in your classpath.

On 2 Jun 2003 at 11:07, Daniel Zhang wrote:

> Scott -
> 
> Are you sure that you download soap-bin-2.3.1.zip from the latest 
> nightly directory? I follow your way exactly and found no
> getEnvelope method in the output, the following is what I did, output is 
> in attachment.
> 
> (1) Download soap-bin-2.3.1.zip from 
> http://cvs.apache.org/dist/soap/nightly/2003-06-02/
> (2) Use WINZIP to unzip to D: drive
> (3) Ran
> 
> D:\soap-2_3_1\lib>%JAVA_HOME%/bin/javap -classpath 
> d:\soap-2_3_1\lib\soap.jar;%CLASSPATH% org.apache.soap.rpc.SOAPContext > 
> output.txt
> 
> I am using java version "1.4.1_01".Then I checked output and found NO 
> getEnvelope method there. See attachment. Any ideas?
> 
> -Daniel
> 
> Scott Nichol wrote:
> 
> >The method is there.  What I did to confirm this is
> >
> >1. Download soap-bin-2.3.1.zip from the latest nightly directory.
> >2. Unzipped to I:
> >3. Ran
> >
> >I:\soap-2_3_1\lib>javap -classpath 
> >i:\soap2_3_1\lib\soap.jar;%CLASSPATH% org.apache.soap.rpc.SOAPContext
> >
> >The output is attached.
> >
> >On 2 Jun 2003 at 9:32, Daniel Zhang wrote:
> >
> >  
> >
> >>Hi, Scott Nichol -
> >>
> >>I download soap nightly build from 
> >>http://cvs.apache.org/dist/soap/nightly/ and try to use a method
> >>getEnvelope() in class org.apache.soap.rpc.SOAPContext in my soap 
> >>program. I found its JavaDoc
> >>lists this method but my compiler (NetBean) complained it can not find 
> >>this method from soap.jar I
> >>got from the same build.
> >>
> >>Is this a bug? Please tell me how to fix it. Thanks a lot!
> >>
> >>-Daniel
> >>
> >>
> >>
> >>
> >>
> >>    
> >>
> >
> >
> >Scott Nichol
> >
> >Do not reply directly to this e-mail address,
> >as it is filtered to only receive e-mail from
> >specific mailing lists.
> >
> >
> >  
> >
> 
> 


Scott Nichol

Do not reply directly to this e-mail address,
as it is filtered to only receive e-mail from
specific mailing lists.



Re: Is this a bug for post version soap 2.3.1?

Posted by Daniel Zhang <zh...@clinicaltools.com>.
Scott -

Are you sure that you downloaded soap-bin-2.3.1.zip from the latest 
nightly directory? I followed your steps exactly and found no
getEnvelope method in the output. The following is what I did; the output is 
attached.

(1) Download soap-bin-2.3.1.zip from 
http://cvs.apache.org/dist/soap/nightly/2003-06-02/
(2) Use WINZIP to unzip to D: drive
(3) Ran

D:\soap-2_3_1\lib>%JAVA_HOME%/bin/javap -classpath 
d:\soap-2_3_1\lib\soap.jar;%CLASSPATH% org.apache.soap.rpc.SOAPContext > 
output.txt

I am using java version "1.4.1_01". Then I checked the output and found NO 
getEnvelope method there. See attachment. Any ideas?

-Daniel

Scott Nichol wrote:

>The method is there.  What I did to confirm this is
>
>1. Download soap-bin-2.3.1.zip from the latest nightly directory.
>2. Unzipped to I:
>3. Ran
>
>I:\soap-2_3_1\lib>javap -classpath 
>i:\soap2_3_1\lib\soap.jar;%CLASSPATH% org.apache.soap.rpc.SOAPContext
>
>The output is attached.
>
>On 2 Jun 2003 at 9:32, Daniel Zhang wrote:
>
>  
>
>>Hi, Scott Nichol -
>>
>>I download soap nightly build from 
>>http://cvs.apache.org/dist/soap/nightly/ and try to use a method
>>getEnvelope() in class org.apache.soap.rpc.SOAPContext in my soap 
>>program. I found its JavaDoc
>>lists this method but my compiler (NetBean) complained it can not find 
>>this method from soap.jar I
>>got from the same build.
>>
>>Is this a bug? Please tell me how to fix it. Thanks a lot!
>>
>>-Daniel
>>
>>
>>
>>
>>
>>    
>>
>
>
>Scott Nichol
>
>Do not reply directly to this e-mail address,
>as it is filtered to only receive e-mail from
>specific mailing lists.
>
>
>  
>


Re: Is this a bug for post version soap 2.3.1?

Posted by Scott Nichol <sn...@scottnichol.com>.
The method is there.  What I did to confirm this is

1. Download soap-bin-2.3.1.zip from the latest nightly directory.
2. Unzipped to I:
3. Ran

I:\soap-2_3_1\lib>javap -classpath 
i:\soap2_3_1\lib\soap.jar;%CLASSPATH% org.apache.soap.rpc.SOAPContext

The output is attached.

On 2 Jun 2003 at 9:32, Daniel Zhang wrote:

> Hi, Scott Nichol -
> 
> I download soap nightly build from 
> http://cvs.apache.org/dist/soap/nightly/ and try to use a method
> getEnvelope() in class org.apache.soap.rpc.SOAPContext in my soap 
> program. I found its JavaDoc
> lists this method but my compiler (NetBean) complained it can not find 
> this method from soap.jar I
> got from the same build.
> 
> Is this a bug? Please tell me how to fix it. Thanks a lot!
> 
> -Daniel
> 
> 
> 
> 
> 


Scott Nichol

Do not reply directly to this e-mail address,
as it is filtered to only receive e-mail from
specific mailing lists.



Is this a bug for post version soap 2.3.1?

Posted by Daniel Zhang <zh...@clinicaltools.com>.
Hi, Scott Nichol -

I downloaded the soap nightly build from 
http://cvs.apache.org/dist/soap/nightly/ and tried to use the method
getEnvelope() in class org.apache.soap.rpc.SOAPContext in my soap 
program. I found that its JavaDoc
lists this method, but my compiler (NetBeans) complained that it cannot find 
this method in the soap.jar I
got from the same build.

Is this a bug? Please tell me how to fix it. Thanks a lot!

-Daniel
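
Besides javap, reflection can verify whether the class version actually visible to your JVM has the method in question, which helps when the compiler may be resolving a different soap.jar than you think. A sketch; the java.util.ArrayList call is just a stand-in, and for the real case you would substitute org.apache.soap.rpc.SOAPContext and getEnvelope:

```java
// Sketch: use reflection to check whether the class the JVM resolves
// really declares a given public method (an alternative to javap).
import java.lang.reflect.Method;

public class MethodCheck {
    public static boolean hasMethod(String className, String methodName) {
        try {
            for (Method m : Class.forName(className).getMethods()) {
                if (m.getName().equals(methodName)) return true;
            }
        } catch (ClassNotFoundException e) {
            // The class is not on the classpath at all.
        }
        return false;
    }

    public static void main(String[] args) {
        // Real case: hasMethod("org.apache.soap.rpc.SOAPContext", "getEnvelope")
        System.out.println(hasMethod("java.util.ArrayList", "size"));
    }
}
```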





Re: Registration Opens for ApacheCon 2003

Posted by Lars Eilebrecht <la...@apache.org>.
According to Hans Kind:

> Do all the rooms in the Alexis provide high speed Internet access?

No, but we will be providing a wireless network for the conference.
We will be trying to make it available in as many rooms as possible,
but it is unlikely that we can do it for all rooms.

ciao...
-- 
Lars Eilebrecht
lars@apache.org

Re: Unable to commit volatile index

Posted by James Abley <ja...@gmail.com>.
On 7 September 2010 08:06, zeeman <ze...@zeeman.de> wrote:

> Hi!
>
> > There is one index at repository level and multiple indexes at each
> > workspace level. The one you are looking at is repository index and it
> > doesn't change much. I don't know what exactly is stored in repository
> > index. My guess is it will hold indexes for global info like node types
> or
> > possibly root node?
> >> as additional info, i took a deeper look on the mentioned
> >> file/directory:
> >> > org.apache.jackrabbit.core.query.lucene.MultiIndex] -[-] (Timer-1:)
> >> Unable to commit volatile index
> >> > java.io.IOException: Cannot delete
> X:\jackrabbit\repository\index\indexes
>
> I was able to reproduce this issue by locking a indexes file (I tested
> with default workspace indexes because it's easier to cause changes in
> the JCR there).
> I just locked it by a simple file lock and caused a change in the JCR.
> The issue was logged in a loop while the lock on the file exists.
> As soon as the lock was released jackrabbit could update the indexes
> file and the logging of that stacktrace stoped.
>
> Though we fixed it by restarting the server (not only JBoss, the whole
> machine) as we had no idea how to monitor open files on a windows
> machine (something like lsof...)
>

Been a _long_ time since I worked on Windows, but IIRC, this was always very
helpful for stuff like that.

http://technet.microsoft.com/en-us/sysinternals/default.aspx

Cheers,

James

Re: Unable to commit volatile index

Posted by zeeman <ze...@zeeman.de>.
Hi!

> There is one index at repository level and multiple indexes at each
> workspace level. The one you are looking at is repository index and it
> doesn't change much. I don't know what exactly is stored in repository
> index. My guess is it will hold indexes for global info like node types or
> possibly root node?
>> as additional info, i took a deeper look on the mentioned
>> file/directory:
>> > org.apache.jackrabbit.core.query.lucene.MultiIndex] -[-] (Timer-1:)
>> Unable to commit volatile index
>> > java.io.IOException: Cannot delete X:\jackrabbit\repository\index\indexes

I was able to reproduce this issue by locking an indexes file (I tested
with the default workspace indexes because it's easier to cause changes in
the JCR there).
I just locked it with a simple file lock and caused a change in the JCR.
The issue was logged in a loop while the lock on the file existed.
As soon as the lock was released, Jackrabbit could update the indexes
file and the logging of that stacktrace stopped.

In the end we fixed it by restarting the server (not only JBoss, the whole
machine), as we had no idea how to monitor open files on a Windows
machine (something like lsof...)


Thanks for the help,
Sebastian
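
The reproduction described here (holding a lock on the index file so Jackrabbit cannot replace it) can be sketched in plain Java. Note that whether an open handle blocks deletion is OS-dependent: it generally does on Windows, which matches the "Cannot delete" error, but generally does not on Unix.

```java
// Sketch: hold a lock on a file the way another process might.  On
// Windows, the open handle/lock also blocks deleting or renaming the
// file, which is what made Jackrabbit's index commit fail in a loop.
import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class IndexLockDemo {
    public static boolean lockAndRelease(File f) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw");
             FileLock lock = raf.getChannel().tryLock()) {
            // While this lock (or merely the open handle, on Windows) is
            // held, an attempt by another process to delete/replace the
            // file would fail, as in the logged IOException.
            return lock != null;
        } // lock and handle released here; deletion succeeds again
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("indexes", null);
        System.out.println(lockAndRelease(f));
        f.delete();
    }
}
```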


Re: Unable to commit volatile index

Posted by Narendra Sharma <na...@gmail.com>.
There is one index at repository level and multiple indexes at each
workspace level. The one you are looking at is repository index and it
doesn't change much. I don't know what exactly is stored in repository
index. My guess is it will hold indexes for global info like node types or
possibly root node?

Thanks,
-Naren

On Mon, Sep 6, 2010 at 3:53 AM, zeeman <ze...@zeeman.de> wrote:

> Hi all,
>
> as additional info, i took a deeper look on the mentioned
> file/directory:
> > org.apache.jackrabbit.core.query.lucene.MultiIndex] -[-] (Timer-1:)
> Unable to commit volatile index
> > java.io.IOException: Cannot delete X:\jackrabbit\repository\index\indexes
>
>
> In  X:\jackrabbit\repository\index I see the following files:
> [Name]            [DateModified]       [Size]
> indexes           A month ago          doesn't change
> indexes.new       now (changing)       doesn't change
> redo.log          now (changing)       increases
>
> redo.log looks in 99% like
> -1 STR
> -1 COM
> -1 STR
> -1 COM
>
> Any ideas what's going on?
>
>
> Kind regards,
> Sebastian
>
>

Re: Timestamp Frustrations

Posted by tr...@clayst.com.
On 3 Jun 2005 Ben Collins-Sussman wrote:

> 'svn merge', on the other hand, is just shortcut for making local 
> edits.  It's no different than opening files in your editor and 
> tweaking them.  And in that case, you definitely don't want the 
> timestamps reset.  At a minimum, it would make 'svn status' quite 
> slow at detecting edited files.  It would be a bit unnatural. 

Well OK, then maybe I don't understand merge.  I gather I would be far 
from the first :-).

I have two main branches of development:

	project/live
	project/dev

Each one has a couple of subdirectories below it.  I keep working 
copies of both.  Sometimes I make bug fixes to the live/ working copy; 
these I usually commit and publish right away.  Most changes are in the 
dev/ working copy, and I don't commit these very often.

The last time live/ was copied to dev/ was r3.  I recently finished a 
bunch of changes to dev/ and was ready to make them live.  I committed 
them from the dev/ working copy.  Then from the live/ working copy I 
did an svn log --stop-on-copy to discover that r3 was what I wanted.  
Then I did this, also from the live/ working copy:

	svn update
	svn merge -r 3:HEAD file:///h:/svnrepos/project/dev/
	[resolved conflicts]
	svn commit -m "Merge XXX modifications into live branch"

Is this correct?  I took it from the "Merging a Whole Branch to 
Another" section of the book.

For timestamps, what I really want here is for the files in the live/ 
wc to have the same timestamps as they had in the dev/ wc -- i.e. the 
last time the file was actually modified (putting aside any mods to 
resolve conflicts).  So unless I'm using merge incorrectly, I think 
this is a case where you would want merge to preserve the timestamps, 
if they were being preserved at all.

--
Tom




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Merge Help

Posted by tr...@clayst.com.
I am reposting this from another thread because it got no response, and 
I'd be interested in some feedback as to whether this is the correct 
use of merge ...  

> 'svn merge', on the other hand, is just shortcut for making local
> edits.  It's no different than opening files in your editor and
> tweaking them.  And in that case, you definitely don't want the
> timestamps reset.  At a minimum, it would make 'svn status' quite
> slow at detecting edited files.  It would be a bit unnatural. 

Well OK, then maybe I don't understand merge.  I gather I would be far 
from the first :-).  

I have two main branches of development:  

	project/live
	project/dev

Each one has a couple of subdirectories below it.  I keep working 
copies of both.  Sometimes I make bug fixes to the live/ working copy; 
these I usually commit and publish right away.  Most changes are in the 
dev/ working copy, and I don't commit these very often.  

The last time live/ was copied to dev/ was r3.  I recently finished a 
bunch of changes to dev/ and was ready to make them live.  I committed 
them from the dev/ working copy.  Then from the live/ working copy I 
did an svn log --stop-on-copy to discover that r3 was what I wanted.  
Then I did this, also from the live/ working copy:  

	svn update
	svn merge -r 3:HEAD file:///h:/svnrepos/project/dev/
	[resolved conflicts]
	svn commit -m "Merge XXX modifications into live branch"

Is this correct?  I took it from the "Merging a Whole Branch to 
Another" section of the book.  

Re the earlier discussion on timestamps, what I really want here is for 
the files in the live/ wc to have the same timestamps as they had in 
the dev/ wc -- i.e. the last time the file was actually modified 
(putting aside any mods to resolve conflicts).  So unless I'm using 
merge incorrectly, I think this is a case where you would want merge to 
preserve the timestamps, if they were being preserved at all.  

--
Tom




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Timestamp Frustrations

Posted by Ben Collins-Sussman <su...@collab.net>.
On Jun 3, 2005, at 9:11 AM, trlists@clayst.com wrote:
>
> Incidentally, why doesn't use-commit-times affect svn merge?

Merges are fundamentally different than checkouts or updates.

Checkouts and updates modify the 'base' versions of files in your  
working copy:  not just the working files, but the underlying  
administrative data.

'svn merge', on the other hand, is just a shortcut for making local  
edits.  It's no different than opening files in your editor and  
tweaking them.  And in that case, you definitely don't want the  
timestamps reset.  At a minimum, it would make 'svn status' quite  
slow at detecting edited files.  It would be a bit unnatural.



---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Timestamp Frustrations

Posted by Thomas Moschny <mo...@ipd.uni-karlsruhe.de>.
On Friday 03 June 2005 17:15, trlists@clayst.com wrote:
> > I'm not sure I understand;  is there some difference between "commit
> > changes" and "copy files from one machine to another?"  Either you're
> > ready to broadcast work to other computers, or you're not.
>
> The two computers are both used by me.  One is a desktop and one is a
> laptop.  I move back and forth between them often, depending on where,
> how, and when I'm working, and whether I need to take work out of the
> house.  I tend to use commit when I'm done with something, which is not
> at all synchronized with switching machines.

Maybe you could use a 'private' devel branch to lower the (psychological) 
barrier of doing a check-in?

Then, when some part of your work is finished, you could finally propagate 
(merge) it into the main branch.

Regards,
Thomas

Re: script runs from command line and as CGI, not under ePerl/mod_perl

Posted by Todd Finney <tf...@boygenius.com>.
At 03:39 PM 7/6/2007 -0400, Perrin Harkins wrote:
>   It seems to be a problem between
>Business::Shipping and the perl that mod_perl is using.  It may
>actually be an issue with an SSL socket library that
>Business::Shipping uses.

We've all had this conversation at some point in our careers:

"Hey, my X is [broken/fixed/on fire/lodged in a bus full of orphans]."

"What did you change?"

"Golly, I didn't change anything, honest."

"Sure."

Well, I don't know what I changed, but something's different.   I attempted 
to recompile Apache and mod_perl with debugging flags turned on, failed 
miserably, and went back and recompiled both per my standard 
procedure.  After doing this, I found that the test handler mysteriously 
started working.

So, I don't know what just happened there, but thank you for your help.  I 
really appreciate it.

Todd


Re: script runs from command line and as CGI, not under ePerl/mod_perl

Posted by Perrin Harkins <pe...@elem.com>.
On 7/5/07, Todd Finney <tf...@boygenius.com> wrote:
> With due respect, Perrin, I disagree with the (common, unfortunately)
> belief that a lack of an active release cycle indicates that a package is
> somehow unsuitable.  Sometimes, that just means that it's done.

Sure, but the real problem is that no one else is using it anymore, so
it's hard to get help with it.

> If I comment out the line '$rate_request->submit() or die
> $rate_request->user_error();', I get the hello world output as
> expected.  If I do not comment out that line, I get no output, and my logs
> say this:
>
> [Thu Jul  5 15:38:06 2007] [notice] Apache/1.3.37 (Unix) PHP/4.4.7
> mod_perl/1.30 configured -- resuming normal operations
> [Thu Jul  5 15:38:06 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
> [Thu Jul  5 15:38:13 2007] [notice] child pid 1013 exit signal Segmentation
> fault (11)

If you can get a stacktrace, someone may be able to help you find
where the problem is.  My shot in the dark is that you either upgraded
perl and didn't recompile something, or mod_perl is not using the same
perl that ePerl is.  It seems to be a problem between
Business::Shipping and the perl that mod_perl is using.  It may
actually be an issue with an SSL socket library that
Business::Shipping uses.

- Perrin

Re: script runs from command line and as CGI, not under ePerl/mod_perl

Posted by Todd Finney <tf...@boygenius.com>.
At 01:38 AM 7/5/2007 -0400, Perrin Harkins wrote:
>Well, the root of your problem is that you're using ePerl.  I haven't
>seen that one in a long time, and I doubt it gets any support releases
>these days.  I also doubt anyone on the list at this point has ever
>used it.  It's just too old.  Time to look at some newer tools, if you
>want to use something that the community can help you with.

With due respect, Perrin, I disagree with the (common, unfortunately) 
belief that a lack of an active release cycle indicates that a package is 
somehow unsuitable.  Sometimes, that just means that it's done.

That said, removing ePerl from the equation is trivial:

[http://heavy.boygenius.com/temp/TestShippingHandler.pm.txt]

[http://heavy.boygenius.com/hello/world]

If I comment out the line '$rate_request->submit() or die 
$rate_request->user_error();', I get the hello world output as 
expected.  If I do not comment out that line, I get no output, and my logs 
say this:

[Thu Jul  5 15:38:06 2007] [notice] Apache/1.3.37 (Unix) PHP/4.4.7 
mod_perl/1.30 configured -- resuming normal operations
[Thu Jul  5 15:38:06 2007] [notice] Accept mutex: sysvsem (Default: sysvsem)
[Thu Jul  5 15:38:13 2007] [notice] child pid 1013 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:14 2007] [notice] child pid 1014 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:15 2007] [notice] child pid 1015 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:17 2007] [notice] child pid 1017 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:18 2007] [notice] child pid 1018 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:20 2007] [notice] child pid 1019 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:21 2007] [notice] child pid 1020 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:22 2007] [notice] child pid 1021 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:24 2007] [notice] child pid 1022 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:25 2007] [notice] child pid 1044 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:27 2007] [notice] child pid 1046 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:28 2007] [notice] child pid 1047 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:30 2007] [notice] child pid 1050 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:31 2007] [notice] child pid 1051 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:32 2007] [notice] child pid 1052 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:34 2007] [notice] child pid 1053 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:35 2007] [notice] child pid 1065 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:36 2007] [notice] child pid 1009 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:38 2007] [notice] child pid 1069 exit signal Segmentation 
fault (11)
[Thu Jul  5 15:38:39 2007] [notice] child pid 1070 exit signal Segmentation 
fault (11)

Until the browser finally stops.

>I don't remember much about how ePerl works, but it's clear you aren't
>sending any headers or generating valid HTML.  That may not be the
>problem though.

ePerl handles the headers without needing to specify them, and the little 
bit of HTML at the bottom of the script should be plenty.

>Why don't you try taking things out of this until it works?  The last
>thing you take out is the culprit.

I tried that already, using a different script:

[http://heavy.boygenius.com/temp/test2.cgi]
[http://heavy.boygenius.com/temp/test2_cgi.txt]

[http://heavy.boygenius.com/temp/test2.phtml]
[http://heavy.boygenius.com/temp/test2_phtml.txt]

If I comment out '$rate_request->submit() or die 
$rate_request->user_error();', it returns an empty page immediately.

My interpretation of the problem is that it seems to occur when 
Business::Shipping goes out over the wire from inside a mod_perl 
environment.  However, that's probably wrong, because grabbing external 
pages from inside mod_perl seems to work just fine in other cases:

[http://heavy.boygenius.com/temp/test_pagegrab.phtml]
[http://heavy.boygenius.com/temp/test_pagegrab_phtml.txt]

Thus, I'm stumped.

thanks again,
Todd



Re: JMeter 1.9 released

Posted by Jordi Salvat i Alabart <js...@atg.com>.

mstover1@apache.org wrote:
> I will make a source release in the next few days - the build file doesn't appear 
> to be set up to make a source tar, 

ant src_dist

should do it -- though it's been a long time since I last tested it. I'll try it now.

> so I have to write that.  In previous releases, 
> source was included in all dists, but that made for a large download, so it was 
> taken out.
> 
> Also, the japanese translation is now in my hands, and I want to make that 
> available, probably as a patch.


> -Mike
> 
> On 8 Aug 2003 at 17:45, Tetsuya Kitahata wrote:
> 
> 
>>http://jakarta.apache.org/site/news.html#20030807.1
>>
>>Congratulations!
>>
>>By the way, where can I find the source version of JMeter 1.9?
>>
>>-- Tetsuya (tetsuya@apache.org)
>>
>>On Thu, 07 Aug 2003 09:40:08 -0400
>>(Subject: JMeter 1.9 released)
>>mstover1@apache.org wrote:
>>
>>
>>>The voting, while far from complete, was unanimous, and JMeter 1.9 is 
>>>released.  The links from jmeter's home pages (jakarta.apache.org/jmeter) 
>>>have been updated to reflect this.  Enjoy!
>>>
>>>Now, let the development fun begin.
>>>
>>>I'll make a source release in the next few days as well and put it up.
>>>
>>>--
>>>Michael Stover
>>>mstover1@apache.org
>>>Yahoo IM: mstover_ya
>>>ICQ: 152975688
>>>AIM: mstover777
>>
>>
>>
>>---------------------------------------------------------------------
>>To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
>>For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
>>
> 
> 
> 
> 
> 
> --
> Michael Stover
> mstover1@apache.org
> Yahoo IM: mstover_ya
> ICQ: 152975688
> AIM: mstover777
> 
> 
> 

-- 
Salut,

Jordi.


Re: One service with operations in different namespaces

Posted by Anne Thomas Manes <an...@manes.net>.
Marcin,

You define the schemas in the <types> section of the WSDL document. For 
example:

<wsdl:definitions name='twoNamespaces'
     targetNamespace='urn:twoNamespaces/wsdl'
     xmlns:soap='http://schemas.xmlsoap.org/wsdl/soap/'
     xmlns:wsdl='http://schemas.xmlsoap.org/wsdl/'
     xmlns:ns1='urn:twoNamespaces/ns1'
     xmlns:ns2='urn:twoNamespaces/ns2'
     xmlns:tns='urn:twoNamespaces/wsdl'>

     <wsdl:types>
         <xsd:schema targetNamespace='urn:twoNamespaces/ns1'
             xmlns:xsd='http://www.w3.org/2001/XMLSchema'>
             <xsd:element name='op1' type='xsd:string'/>
         </xsd:schema>
         <xsd:schema targetNamespace='urn:twoNamespaces/ns2'
             xmlns:xsd='http://www.w3.org/2001/XMLSchema'>
             <xsd:element name='op2' type='xsd:string'/>
         </xsd:schema>
     </wsdl:types>

     <wsdl:message name='op1'>
         <wsdl:part name='body' element='ns1:op1'/>
     </wsdl:message>
     <wsdl:message name='op2'>
         <wsdl:part name='body' element='ns2:op2'/>
     </wsdl:message>

     <wsdl:portType name='interface'>
         <wsdl:operation name='op1'>
             <wsdl:input message='tns:op1'/>
         </wsdl:operation>
         <wsdl:operation name='op2'>
             <wsdl:input message='tns:op2'/>
         </wsdl:operation>
     </wsdl:portType>

     <wsdl:binding name='interfaceSOAP' type='tns:interface'>
         <soap:binding
             transport='http://schemas.xmlsoap.org/soap/http'
             style='document'/>
         <wsdl:operation name='op1'>
             <soap:operation
               soapAction='op1'
               style='document'/>
             <wsdl:input>
                 <soap:body use='literal'/>
             </wsdl:input>
         </wsdl:operation>
         <wsdl:operation name='op2'>
             <soap:operation
               soapAction='op2'
               style='document'/>
             <wsdl:input>
                 <soap:body use='literal'/>
             </wsdl:input>
         </wsdl:operation>
     </wsdl:binding>
</wsdl:definitions>


At 08:58 AM 9/25/2003 +0200, you wrote:
>>Are you using rpc/encoded or doc/literal?
>
>I am using message service.
>
>>If you are using doc/literal, then the namespace of the top-level element 
>>in your SOAP body is determined by the targetNamespace attribute of the 
>><schema> that defines the input element. If you want to use different 
>>namespaces, you need to define each message structure in a different schema.
>
>How can I tell Axis in deployment, where the XML Schema is? Could you 
>maybe give a simple example?
>
>Regards,
>Marcin
>
>>Regards,
>>Anne
>>At 09:16 PM 9/20/2003 +0200, you wrote:
>>
>>>Hi.
>>>I would like to make a message service that has two operations in 
>>>different namespaces. So one would be invoked by messages <ns1:op1/> and 
>>><ns2:op2/>.
>>>
>>>Unfortunately if I try to do it, I get an error, that Axis can't match 
>>>an operation. When I set a namespace attribute in deploying I can run 
>>>only one of those messages.
>>>
>>>Is it possible to do it? Maybe it even isn't possible to describe in WSDL?
>
>--
>-------------------------------------------------------------
>                       Marcin Okraszewski
>okrasz@o2.pl                                       GG: 341942
>okrasz@vlo.ids.gda.pl          PGP: www.okrasz.prv.pl/pgp.asc
>-------------------------------------------------------------
>



Re: Request reply

Posted by Bruno Dusausoy <bd...@yp5.be>.
On Fri, 05 Nov 2010 14:54:22 +0800, Willem Jiang
<wi...@gmail.com> wrote:
> I just checked the Camel code; there is a big difference between
> inOut("jms:xmlOrders") and
> to("jms:xmlOrders?exchangePattern=InOut")
> 
> inOut("jms:xmlOrders") will set the exchange pattern of the exchange
> to be InOut, and send the exchange to the endpoint.
> to("jms:xmlOrders?exchangePattern=InOut") will just send the exchange
> to the endpoint, it will not change the exchange pattern of the
> exchange. The exchangePattern only takes effect on the jms endpoint
> consumer when it creates the Exchange.
> 
> The issue with your first route is that bean(ValidatorBean.class,
> "validate") sets the exchange pattern to InOut, so the jms:valid
> endpoint expects a response from the "stream:out" endpoint, which
> blocks the message that needs to be sent back to "jms:xmlOrders".
> 
> Removing the bean() part removes the block, so you get what you need.
> 

Thanks a lot for the explanation !
-- 
Bruno Dusausoy
YP5 Software
--
Think of the environment: limit printing of this e-mail.
Please don't print this e-mail unless you really need to.

Re: Request reply

Posted by Willem Jiang <wi...@gmail.com>.
I just checked the Camel code; there is a big difference between 
inOut("jms:xmlOrders") and to("jms:xmlOrders?exchangePattern=InOut")

inOut("jms:xmlOrders") will set the exchange pattern of the exchange to 
be InOut, and send the exchange to the endpoint.
to("jms:xmlOrders?exchangePattern=InOut") will just send the exchange to 
the endpoint, it will not change the exchange pattern of the exchange. 
The exchangePattern only takes effect on the jms endpoint consumer when 
it creates the Exchange.

The issue with your first route is that bean(ValidatorBean.class, "validate") 
sets the exchange pattern to InOut, so the jms:valid endpoint 
expects a response from the "stream:out" endpoint, which blocks the 
message that needs to be sent back to "jms:xmlOrders".

Removing the bean() part removes the block, so you get what you need.

On 11/4/10 8:33 PM, Bruno Dusausoy wrote:
> On Thu, 04 Nov 2010 11:23:46 +0100, Bruno Dusausoy<bd...@yp5.be>
> wrote:
>> Hi,
>>
> [...]
>>
>> So my question is : "is there a difference between setting the
>> exchangePattern option and using the inOut() method ?"
>>
> [...]
>
> Ok, I think I've tightened the scope of the problem.
> This code doesn't work (as explained before) :
>
> from("file:src/data?noop=true").to("jms:incomingOrders");
> from("jms:incomingOrders")
>      .inOut("jms:xmlOrders");
> // ValidatorBean.validate() always return the boolean value "true".
> from("jms:xmlOrders")
>      .bean(ValidatorBean.class, "validate")
>      .to("jms:valid");
> from("jms:valid").to("stream:out");
>
>
> If I use exactly the same, only replacing
>
> from("jms:incomingOrders")
>      .inOut("jms:xmlOrders");
>
> with
>
> from("jms:incomingOrders")
>      .to("jms:xmlOrders?exchangePattern=InOut");
>
> It works as expected, resulting in displaying "true" to the console.
>
> *But*
>
> When removing the last route and the .to("jms:valid"), like this :
>
> from("file:src/data?noop=true").to("jms:incomingOrders");
> from("jms:incomingOrders")
>      .inOut("jms:xmlOrders");
> from("jms:xmlOrders")
>      .bean(ValidatorBean.class, "validate");
>
> it always works, either by using inOut() or to() with the
> "exchangePattern=InOut" option.
>
> Can someone tell me why it is this way?
>
> Regards.


-- 
Willem
----------------------------------
FuseSource
Web: http://www.fusesource.com
Blog:    http://willemjiang.blogspot.com (English)
          http://jnn.javaeye.com (Chinese)
Twitter: willemjiang

Re: [users@httpd] Virtual host causing problems with default server

Posted by Robert Moskowitz <rg...@htt-consult.com>.
At 03:47 PM 7/8/2003 +0200, Patrick Donker wrote:
>read http://httpd.apache.org/docs-2.0/vhosts/

I have.  Many times.

I can't figure out what I am missing.  And I think I must be missing 
something from my Main Server section to cause a URL NOT to the virtual 
host to still go to the virtual host.


>Robert Moskowitz wrote:
>
>>My 2.0.44 server is running on NT.
>>
>>When I add the lines:
>>
>><VirtualHost abc.org>
>>     ServerAdmin abc@abc.org
>>     DocumentRoot "D:/Pages-abc"
>>     ServerName abc.org:80
>>     ErrorLog logs/abc.org-error.log
>>     CustomLog logs/abc.org-access.log common
>></VirtualHost>
>>
>>Access to my default server gets redirected to the virtual host  :(
>>
>>Section 2 of my conf does have:
>>
>>ServerName host.xyz.com:80
>>DocumentRoot "D:/Pages"
>>
>>What am I missing???
>>
>>
>>Illegitimi non Carborundum
>>
>>
>>---------------------------------------------------------------------
>>The official User-To-User support forum of the Apache HTTP Server Project.
>>See <URL:http://httpd.apache.org/userslist.html> for more info.
>>To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
>>   "   from the digest: users-digest-unsubscribe@httpd.apache.org
>>For additional commands, e-mail: users-help@httpd.apache.org
>>
>
>




Re: post-update hook?

Posted by Jan Hendrik <ja...@bigfoot.com>.
Concerning Re: post-update hook?
Jeremy Pereira wrote on 7 Jun 2004, 15:13, at least in part:

> Sorry, didn't realise you were using TortoiseSVN.  I was assuming
> command line client.

Never mind, and thank you for your suggestion anyway.  Not least because 
of TortoiseSVN I wanted to do this via a post-update 
hook, so folks do not have to remember further steps.  But if that's 
not possible then there will be another way ... It just takes collecting 
ideas to finally find the best solution.

Jan Hendrik

---------------------------------------
In memoriam Ronald Reagan:

     The government's view of the economy
     can be summed up in a few short phrases:
     If it moves, tax it. If it keeps moving, regulate it.
     And if it stops moving, subsidize it.
                -- Ronald Reagan


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: post-update hook?

Posted by Jan Hendrik <ja...@bigfoot.com>.
Concerning Re: post-update hook?
John Peacock wrote on 7 Jun 2004, 7:10, at least in part:

> Jan Hendrik wrote:
> > update on a working copy used exclusively by the local web dev
> > server/for uploads to the live web server (as John Peacock just
> > suggests in another thread).
> 
> To amplify a little what I suggested on the other thread, having an
> automatic update for a webserver is great for a development server
> (not so much for production sites).

Indeed, too many flaws can go online too quickly! :-)

> But there is no reason that the production server can't also have its
> own working copy; just make that one a manual update.  If you are into
> that sort of thing, you can even make specific release tags and then
> switch the production server WC to the next release tag (instead of
> using the HEAD revision exclusively).  You have lots of options...

Well, that's a bit beyond our current scope as the production server 
is hosted outside ... don't have direct access ... and remotely 
updating a working copy over dial-in ... surely not the best of 
things.  But one of the many options will finally do about what I 
want, as always.

Jan Hendrik

---------------------------------------
In memoriam Ronald Reagan:

     You and I have a rendezvous with destiny.
     We will preserve for our children -
     this, the last best hope of man on earth,
     or we will sentence them to take the first step
     into a thousand years of darkness. 
     If we fail, at least let our children and
     our children's children say of us 
     we justified our brief moment here.
     We did all that could be done.
                -- Ronald Reagan, A Time For Choosing Speech, 1964



Re: post-update hook?

Posted by Jeremy Pereira <je...@ntlworld.com>.
Sorry, didn't realise you were using TortoiseSVN.  I was assuming 
command line client.

On Jun 7, 2004, at 11:26, Jan Hendrik wrote:

> Concerning Re: post-update hook?
> Jeremy Pereira wrote on 6 Jun 2004, 21:38, at least in part:
>
>>
>> On Jun 4, 2004, at 15:34, Jan Hendrik wrote:
>>
>>> As I see now what I wanted to do is impossible since the hook is
>>> executed on the svn server and has no idea from where both commit
>>> and update request origin.  So a hook script may be able to update a
>>> specific working copy or webserver, but not any working copy.  Such
>>> a hook would have to be placed into the working copies.  Sorry for
>>> the noise.
>>
>> How about creating a client side script that does the svn update and
>> then regenerates the generated files?
>
> It would have to be run either manually or by a scheduled task, but
> it would not run automatically on svn update or even updating with
> TortoiseSVN, the usual way here, would it?  Yes, that would work,
> though to avoid conflicts going unnoticed in a scheduled task I
> would rather leave the svn update out.  It's not the kind of
> automation I'd like to have, but better than nothing.  Or doing the
> update on a working copy used exclusively by the local web dev
> server/for uploads to the live web server (as John Peacock just
> suggests in another thread).  Well, some ideas to think about ...
> Thanks for all the suggestions.
>
> Jan Hendrik
>
> ---------------------------------------
> Freedom quote:
>
>      As a man is said to have a right to his property,
>      he may be equally said to have a property in his rights.
>      Where an excess of power prevails,
>      property of no sort is duly respected.
>      No man is safe in his opinions, his person,
>      his faculties, or his possessions.
>                 -- James Madison, National Gazzette, 1792
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
> For additional commands, e-mail: users-help@subversion.tigris.org
>
>
>
--
Jeremy Pereira                             Tel: +44 (0)1252 401035
Senior Consultant                       Mobile: +44 (0)7884 265457
Axcelia Ltd                                Fax: +44 (0)1252 336934
http://www.axcelia.com           mailto:jeremy.pereira@axcelia.com


______________________________________________________________________
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email 
______________________________________________________________________


Re: post-update hook?

Posted by John Peacock <jp...@rowman.com>.
Jan Hendrik wrote:
> update on a working copy used exclusively by the local web dev 
> server/for uploads to the live web server (as John Peacock just 
> suggests in another thread).

To amplify a little what I suggested on the other thread, having an automatic 
update for a webserver is great for a development server (not so much for 
production sites).

But there is no reason that the production server can't also have its own 
working copy; just make that one a manual update.  If you are into that sort of 
thing, you can even make specific release tags and then switch the production 
server WC to the next release tag (instead of using the HEAD revision 
exclusively).  You have lots of options...

John

-- 
John Peacock
Director of Information Research and Technology
Rowman & Littlefield Publishing Group
4720 Boston Way
Lanham, MD 20706
301-459-3366 x.5010
fax 301-429-5747


Re: post-update hook?

Posted by Jan Hendrik <ja...@bigfoot.com>.
Concerning Re: post-update hook?
Jeremy Pereira wrote on 6 Jun 2004, 21:38, at least in part:

> 
> On Jun 4, 2004, at 15:34, Jan Hendrik wrote:
> 
> > As I see now what I wanted to do is impossible since the hook is
> > executed on the svn server and has no idea from where both commit
> > and update request origin.  So a hook script may be able to update a
> > specific working copy or webserver, but not any working copy.  Such
> > a hook would have to be placed into the working copies.  Sorry for
> > the noise.
> 
> How about creating a client side script that does the svn update and
> then regenerates the generated files?

It would have to be run either manually or by a scheduled task, but 
it would not run automatically on svn update or even updating with 
TortoiseSVN, the usual way here, would it?  Yes, that would work, 
though to avoid conflicts going unnoticed in a scheduled task I 
would rather leave the svn update out.  It's not the kind of 
automation I'd like to have, but better than nothing.  Or doing the 
update on a working copy used exclusively by the local web dev 
server/for uploads to the live web server (as John Peacock just 
suggests in another thread).  Well, some ideas to think about ... 
Thanks for all the suggestions.

Jan Hendrik

---------------------------------------
Freedom quote:

     As a man is said to have a right to his property,
     he may be equally said to have a property in his rights.
     Where an excess of power prevails,
     property of no sort is duly respected.
     No man is safe in his opinions, his person,
     his faculties, or his possessions.
                -- James Madison, National Gazzette, 1792



Re: post-update hook?

Posted by Jeremy Pereira <je...@ntlworld.com>.
On Jun 4, 2004, at 15:34, Jan Hendrik wrote:

> As I see now what I wanted to do is impossible since the hook is
> executed on the svn server and has no idea from where both
> commit and update request origin.  So a hook script may be able
> to update a specific working copy or webserver, but not any
> working copy.  Such a hook would have to be placed into the
> working copies.  Sorry for the noise.
>
>

How about creating a client side script that does the svn update and 
then regenerates the generated files?
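Such a client-side wrapper can be sketched as follows. This is only an illustrative Python sketch (the command lists are placeholders, not a real project's build commands); the check for svn's "C " status lines addresses the worry, raised later in this thread, about conflicts going unnoticed in a scheduled task:

```python
import subprocess

def update_and_regenerate(update_cmd, regen_cmd):
    """Run the update command; if any output line reports a conflict
    (svn prefixes those with 'C '), skip regeneration so the conflict
    does not go unnoticed.  Returns True if regeneration ran."""
    result = subprocess.run(update_cmd, capture_output=True,
                            text=True, check=True)
    conflicts = [line for line in result.stdout.splitlines()
                 if line.startswith("C ")]
    if conflicts:
        print("conflicts detected, regeneration skipped:", conflicts)
        return False
    subprocess.run(regen_cmd, check=True)
    return True

# hypothetical usage:
# update_and_regenerate(["svn", "update", "wc"], ["make", "generated"])
```

Run manually or from a scheduled task, it at least refuses to regenerate on top of conflicted files.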

--
Jeremy Pereira
http://www.jeremyp.net



Re: use-commit-times = effectless?

Posted by Jan Hendrik <ja...@bigfoot.com>.
Concerning Re: use-commit-times = effectless?
Philip Martin wrote on 6 Dec 2003, 18:29, at least in part:

> > Nope, it is the same with both add/commit & update per file:///
> > access as just tested. Files committed some days ago got the
> > timestamp "Dec 6, 2003, 11:00", that is the actual moment of
> > updating.
> 
> Are you claiming 0.33 doesn't work?  Please provide step-by-step
> instructions to reproduce the problem.

Phil,
at least with 0.34 use-commit-times worked. If I find some time I'll 
wonder why 0.32 had the option in conf without the function ... <g>

Best regards

Jan Hendrik

---------------------------------------
Freedom quote:

     Another current catch-phrase is the complaint
     that the nations of the world are divided
     into 'haves' and the 'have-nots.'
     Observe that the 'haves' are those who have freedom,
     and that it is freedom that the 'have-nots' have not.
                -- Ayn Rand



Re: use-commit-times = effectless?

Posted by Erik Huelsmann <e....@gmx.net>.
You need client version 0.33; with our policy of server and client
being compatible at one minor version apart, you can upgrade your
client without upgrading your server. This would not affect deltification
(unless you use the file:/// protocol).

bye,

Erik.


> Concerning Re: use-commit-times = effectless?
> Philip Martin wrote on 6 Dec 2003, 18:29, at least in part:
> 
> > >> > Is the setting "use-commit-times = yes" effectless in Windows?
> > >> 
> > >> > SVN .32.1, TSVN .21, Apache 20.48.
> > >> 
> > >> See the CHANGES file, you need 0.33.
> > >
> > > Nope, it is the same with both add/commit & update per file:///
> > > access as just tested. Files committed some days ago got the
> > > timestamp "Dec 6, 2003, 11:00", that is the actual moment of
> > > updating.
> > 
> > Are you claiming 0.33 doesn't work?  Please provide step-by-step
> > instructions to reproduce the problem.
> 
> Phil, I think we talk of different relations: it looks you meant that 
> with 0.33 use-commit-times should work, while I related "you need 
> 0.33" to me having 0.32.1 with Apache 2.0.48 instead of 2.0.47. 
> (Actually I never saw CHANGES file 0.33 since I stopped upgrading 
> because of manualization of deltafication.) If this was an issue 
> solved with 0.33, please drop this.
 





Re: use-commit-times = effectless?

Posted by Jan Hendrik <ja...@bigfoot.com>.
Concerning Re: use-commit-times = effectless?
Philip Martin wrote on 6 Dec 2003, 18:29, at least in part:

> >> > Is the setting "use-commit-times = yes" effectless in Windows?
> >> 
> >> > SVN .32.1, TSVN .21, Apache 20.48.
> >> 
> >> See the CHANGES file, you need 0.33.
> >
> > Nope, it is the same with both add/commit & update per file:///
> > access as just tested. Files committed some days ago got the
> > timestamp "Dec 6, 2003, 11:00", that is the actual moment of
> > updating.
> 
> Are you claiming 0.33 doesn't work?  Please provide step-by-step
> instructions to reproduce the problem.

Phil, I think we are talking at cross purposes: it looks like you meant that 
with 0.33 use-commit-times should work, while I related "you need 
0.33" to me having 0.32.1 with Apache 2.0.48 instead of 2.0.47. 
(Actually I never saw CHANGES file 0.33 since I stopped upgrading 
because of manualization of deltafication.) If this was an issue 
solved with 0.33, please drop this.

However, in case we have not talked apples and oranges, here are 
the steps (repos built with 0.27, current version 0.32.1, 
repository and both working copies in question are all on the same 
machine and have file access to repos):

1) create new file test.htm in working copy 1

2) commit wc1/test.htm

3) update wc2

4) test.htm has timestamp of update, not of commit time in wc1

Instead of creating a new file one can do some commits from wc1, 
wait some minutes or a day (just to make timestamp differences 
significantly clear), and update wc2. All files updated to wc2 will 
have the timestamp of the update, not that of their last commit.
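Mechanically, use-commit-times just means the client stamps each updated file with its last-commit time instead of the wall-clock time of the update. That mechanism can be sketched in Python (illustrative only; the epoch value is an arbitrary stand-in for a real commit time):

```python
import os
import tempfile

def apply_commit_time(path, commit_epoch):
    """Stamp the file's access/modification times with the commit
    time -- what 'use-commit-times = yes' asks the client to do
    instead of leaving the time of the update itself."""
    os.utime(path, (commit_epoch, commit_epoch))

# usage sketch: a scratch file gets a fixed (hypothetical) commit time
fd, path = tempfile.mkstemp()
os.close(fd)
apply_commit_time(path, 1070704800)
assert int(os.path.getmtime(path)) == 1070704800
os.remove(path)
```

When the option has no effect, files simply keep the mtime the OS assigned when the update wrote them, which matches the behaviour described above.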

Best regards

Jan Hendrik



> 
> -- 
> Philip Martin
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org For
> additional commands, e-mail: users-help@subversion.tigris.org
> 


---------------------------------------
Freedom quote:

     Freedom is a fragile thing
     and is never more than one generation away from extinction.
     It is not ours by inheritance;
     it must be fought for and defended constantly by each generation,
     for it comes only once to a people.
     Those who have known freedom, and then lost it,
     have never known it again.
                -- Ronald Reagan



Re: use-commit-times = effectless?

Posted by Jan Hendrik <ja...@bigfoot.com>.
Concerning Re: use-commit-times = effectless?
Philip Martin wrote on 6 Dec 2003, 18:29, at least in part:

> >> > Is the setting "use-commit-times = yes" effectless in Windows?
> >> 
> >> > SVN .32.1, TSVN .21, Apache 20.48.
> >> 
> >> See the CHANGES file, you need 0.33.
> >
Finally found the .33 changes list. However, I never installed .33, 
but the conf file of .32.1 (a clean install) already offered this option. 
Strange, but it does not matter. I'll check it again once I install a 
newer version of SVN - which may not be this year anymore, since 
the HDD of my machine is being sent in this week under warranty and I 
have to make do with a disk too small for SVN and repositories. And my 
machine is still the one where repos corruption happens less 
frequently, so I removed SVN from all the others in our small peer2peer 
LAN.

Best regards

Jan Hendrik

---------------------------------------
Freedom quote:

     We've gone astray from first principles.
     We've lost sight of the rule that individual freedom and ingenuity
     are at the very core of everything that we've accomplished.
     Government's first duty is to protect the people, not run their lives.
                -- Ronald Reagan



Re: use-commit-times = effectless?

Posted by Philip Martin <ph...@codematters.co.uk>.
"Jan Hendrik" <ja...@bigfoot.com> writes:

> Philip Martin wrote on 5 Dec 2003, 19:28, at least in part:
>
>> "Jan Hendrik" <ja...@bigfoot.com> writes:
>> 
>> > Is the setting "use-commit-times = yes" effectless in Windows?
>> 
>> > SVN .32.1, TSVN .21, Apache 20.48.
>> 
>> See the CHANGES file, you need 0.33.
>
> Nope, it is the same with both add/commit & update per file:/// 
> access as just tested. Files committed some days ago got the 
> timestamp "Dec 6, 2003, 11:00", that is the actual moment of 
> updating.

Are you claiming 0.33 doesn't work?  Please provide step-by-step
instructions to reproduce the problem.

-- 
Philip Martin


Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Todd Finney <tf...@boygenius.com>.
At 05:52 PM 1/30/2007 -0500, Jonathan Vanasco wrote:
>my gut feeling is that you're having an error that deals with valid /
>invalid / new session ids.  but it could really be anything - maybe
>you're having connectivity issues with apache::session (timing out ?
>are you using apache::dbi ? etc )

The bigger, fancier test case checks all of that.

----
        my %session = ();
        $log->info("AccessManagement::Session: tying session");
        eval {
            tie %session, 'Apache::Session::MySQL', $session_id, {
                DataSource=>'dbi:mysql:boygenius',
                UserName=>'nobody',
                Password=>'XXXXXXXX',
                LockDataSource=>'dbi:mysql:boygenius',
                LockUserName=>'nobody',
                LockPassword=>'XXXXXXXX',
            };
        };
        if ($@) {
            $log->info('AccessManagement::Session: '.$@);
            undef $session_id;
            tie %session, 'Apache::Session::MySQL', $session_id, {
                DataSource=>'dbi:mysql:boygenius',
                UserName=>'nobody',
                Password=>'XXXXXXXX',
                LockDataSource=>'dbi:mysql:boygenius',
                LockUserName=>'nobody',
                LockPassword=>'XXXXXXXX',
            };
            $log->info('AccessManagement::Session: New Session Created Successfully.');
        }
        $log->info('AccessManagement::Session: session id: '.$session_id);

----

It wedges at "AccessManagement::Session: tying session"

What I posted was a simplified test case, in order to demonstrate the 
problem in as few lines of code as possible.  I even based it on a code 
section that should be "known good", as it appears in the module's 
perldoc.  I thought I made this pretty clear; I'll try harder next time.

Problems such as Apache::Session timing out are unlikely to be the culprit, 
as the problem is reliably reproducible under narrow, specific 
circumstances as outlined in my original message.  Sessions created under 
the successful cases never fail, and sessions created under the failure 
cases never succeed.  Removing the single line in question causes all 
requests to succeed.
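The attach-or-create shape of the tie code above (try the existing session id inside eval, fall back to a fresh session on failure) can be sketched generically. This is an illustrative Python sketch, not Apache::Session's actual API; a plain dict stands in for the MySQL-backed store, and the id scheme is hypothetical:

```python
def attach_or_create(store, session_id):
    """Attach to an existing session if session_id is valid;
    otherwise fall back to creating a fresh session, mirroring
    the eval {} / if ($@) retry in the Perl above.

    'store' is a plain dict standing in for the MySQL-backed
    session store (an assumption for illustration)."""
    try:
        if session_id is None:
            raise KeyError("no session id supplied")
        return session_id, store[session_id]   # existing session found
    except KeyError:
        new_id = "sess-%d" % (len(store) + 1)  # hypothetical id scheme
        store[new_id] = {}                     # "New Session Created"
        return new_id, store[new_id]
```

Logging each branch, as the Perl does, is what localizes the wedge to the tie call itself rather than to the fallback path.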

>just as an example, in my session class i do:

Yea, thanks.

>anyways, you shouldn't be writing new code for 1.3.x  unless you're
>stuck supporting a legacy app.

And thanks again.






Re: ERR_CLIENT_ABORT

Posted by Leif Hedstrom <zw...@apache.org>.
On 12/08/2010 05:26 AM, a@test123.ru wrote:
>
> I will continue to use ATS with fixed timeouts and post results later. By now:
> Pros:
> + HTTP11 support (no errors which I saw when I was using squid)
> + Compact binary log
> ++ One big cache-db file instead of lot of small files
> +++ Caching algorithm works just fine out-of-the-box. I mean I don't need to tune "max_object_size" or increase the TTL for images and CSS files. It looks like ATS handles these things automatically.

Great! Yes, I agree 100% that all the features you mention are major "wins" 
with ATS.

> Cons:
> - A lot of config files
> - A lot of irrelevant config options. Why should I care? I want a simple proxy. It's the 21st century; do you think HTTP caching is still so complex? :)

We are painfully aware of this, and it will be addressed further after 
the stable v3.0 release. It will take time though, so hopefully people 
will bear with us (what makes it particularly complicated is that we 
wish to improve/build more on our clustering features, which requires 
configs to be shareable and dynamically reloadable).

That much said, one thing I've been thinking of is to provide either a 
small 'wizard' (command line thing, answer a few questions, and it'll 
set up your configs). Or, ship with a small number of config sets (e.g. 
"Forward proxy", "Reverse proxy" and "Transparent proxy"), which would 
then unpack the required files, perhaps with minimal defaults.

One problem today with eliminating a lot of the "defaults" from 
records.config is that without them, it's difficult to find / know what 
settings you can tweak. As in your case, the timeouts are obviously very 
important. But even so, I think we can provide smaller defaults for 
these three config sets, which might turn records.config into maybe 40 
configs, instead of 400+.

> - Need to compile it. Figuring out that I have to disable "fd_events" (or smth) on a Debian system takes time. A binary distribution is what users want, IMHO.

Yes. The hope is that once we have a more stable release, we'll get the 
big distros to pick it up. This is also high on our wish list; maybe we 
should get cracking on this some more and line up some volunteer help. I 
know "ming_zym" has a .spec file he's been using, which he has 
contributed, but we'll need to finish up these things to be at a quality 
where distros would accept them. And we'll obviously need a .deb for 
Ubuntu / Debian.

> - Lack of documentation.

Hmmm, not sure I agree with this one. There's a *lot* of documentation:

     http://trafficserver.apache.org/docs.html


These aren't 100% up to date (some features are missing from the docs, and 
some documented features we don't support), but I think they are both excellent starting 
points. Miles and Igor (and a few others) are also actively working on 
getting docs ready for a v3.0 release.

> Missing features:
> * Proxy authorization. Right now the only way is IP-based auth (bypass.config). It would be useful to have Basic HTTP auth for authorizing clients. Of course, LDAP integration is welcome.

Noted. The hope is that we can write a plugin for this at some point.

> IMHO, ATS 2.1.4-unstable is stable enough for basic usage. It deserves to bear the name "beta" :) I remember the guy who uses 44 GB for in-memory caching and expects bad response times, but that is a HUGE installation. As a basic caching proxy, ATS works just fine. However, the config files have to be simplified. I am thinking about writing a wiki page, "ATS installation as forward proxy", but I think today that would be a waste of time. You are reviewing the configs, right? I mean that "log2 -> log" change. That is the right direction.


I'm very happy to hear that you are having a good experience! And 
thank you so much for the feedback; please keep it coming! The only way 
we can improve and make this top-notch, production-ready software is 
with user input like yours.

Cheers!

-- leif


Re: ERR_CLIENT_ABORT

Posted by a...@test123.ru.
Thanks for the explanations! I changed the timeouts from 0 to 3600. Internal clients are trusted, so I think that's OK.
My overall experience is good. Very good. I use ATS as a transparent caching proxy and it works like a charm. Last year I tried squid, and I gave up on it. Complex pages loaded slower than without squid (at least as seen with my own eyes). Also, squid caused errors when:
- uploading big files (video)
- POSTing to HTTP11 servers, such as the Redmine project management tool.

I will continue to use ATS with fixed timeouts and post results later. By now:
Pros:
+ HTTP11 support (no errors which I saw when I was using squid)
+ Compact binary log
++ One big cache-db file instead of lot of small files
+++ Caching algorithm works just fine out-of-the-box. I mean I don't need to tune "max_object_size" or increase the TTL for images and CSS files. It looks like ATS handles these things automatically.
Cons:
- A lot of config files
- A lot of irrelevant config options. Why should I care? I want a simple proxy. It's the 21st century; do you think HTTP caching is still so complex? :)
- Need to compile it. Figuring out that I have to disable "fd_events" (or smth) on a Debian system takes time. A binary distribution is what users want, IMHO.
- Lack of documentation. 
Missing features:
* Proxy authorization. Right now the only way is IP-based auth (bypass.config). It would be useful to have Basic HTTP auth for authorizing clients. Of course, LDAP integration is welcome.

Briefly:
IMHO, ATS 2.1.4-unstable is stable enough for basic usage. It deserves to bear the name "beta" :) I remember the guy who uses 44 GB for in-memory caching and expects bad response times, but that is a HUGE installation. As a basic caching proxy, ATS works just fine. However, the config files have to be simplified. I am thinking about writing a wiki page, "ATS installation as forward proxy", but I think today that would be a waste of time. You are reviewing the configs, right? I mean that "log2 -> log" change. That is the right direction.

Thanx for your work,
Alexey



On Tue, 07 Dec 2010 09:04:58 -0700, Leif Hedstrom <zw...@apache.org> wrote:
> On 12/07/2010 08:10 AM, a@test123.ru wrote:
>> More info.
>>
>> 1) Problem appears in both transparent and explicit proxy.
>> 2) wget http://<VIDEOFILE>  works. Problem appears in Opera, Firefox, Windows and Linux.
>> 3) I tested video from youtube, seems like it works. However, video from vkontakte.ru causes
>> ERR_CLIENT_ABORT very often.
>> 4) I tested squid and video works fine.
>>
>> My idea is: the problem is the socket between the browser and ATS. I changed:
>>
>> CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 0 # was 15
>> CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 0 # was 30
>>
>> The error is gone. Video is stable.
>> Is this the right change? Could it cause other problems?
> 
> Heh, I was going to reply to your first email and suggest increasing 
> timeouts. FWIW, 0 means no timeout. The first one doesn't make a whole 
> lot of sense; all that does is let your browsers control KA timeouts 
> (which you might want, but I can't see how that affects this problem). 
> The second one, however, could cause problems if the site bursts 
> for some short amount of time and then goes idle for a long time.
> 
> I was also going to suggest increasing 
> proxy.config.http.transaction_active, which I've noticed causes problems 
> with youtube if set too low (the new default for that one is 900s, which 
> is 15 minutes). Setting any of these timeouts to 0 has its own risks 
> (abuse, bad clients, etc.); it'd probably be better to jack them up high 
> enough that the problem goes away while you still have some sort of 
> timeout.
> 
> Curious to hear about your experiences (other than the timeouts) too. Is 
> Apache TS working as you expected so far? Any problems, concerns, 
> missing features, crashes etc.?
> 
> Cheers,
> 
> -- Leif


Re: ERR_CLIENT_ABORT

Posted by Leif Hedstrom <zw...@apache.org>.
On 12/07/2010 08:10 AM, a@test123.ru wrote:
> More info.
>
> 1) Problem appears in both transparent and explicit proxy.
> 2) wget http://<VIDEOFILE>  works. Problem appears in Opera, Firefox, Windows and Linux.
> 3) I tested video from youtube, seems like it works. However, video from vkontakte.ru causes ERR_CLIENT_ABORT very often.
> 4) I tested squid and video works fine.
>
> My idea is: the problem is the socket between the browser and ATS. I changed:
>
> CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 0 # was 15
> CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 0 # was 30
>
> The error is gone. Video is stable.
> Is this the right change? Could it cause other problems?

Heh, I was going to reply to your first email and suggest increasing 
timeouts. FWIW, 0 means no timeout. The first one doesn't make a whole 
lot of sense; all that does is let your browsers control KA timeouts 
(which you might want, but I can't see how that affects this problem). 
The second one, however, could cause problems if the site bursts 
for some short amount of time and then goes idle for a long time.

I was also going to suggest increasing 
proxy.config.http.transaction_active, which I've noticed causes problems 
with youtube if set too low (the new default for that one is 900s, which 
is 15 minutes). Setting any of these timeouts to 0 has its own risks 
(abuse, bad clients, etc.); it'd probably be better to jack them up high 
enough that the problem goes away while you still have some sort of 
timeout.
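
Leif's suggestion, keeping finite but generous timeouts rather than disabling them, would look something like this in records.config. The values are illustrative starting points only, and the full option name assumed for the active timeout is proxy.config.http.transaction_active_timeout_in:

```
# Client keep-alive idle timeout (seconds); was 15, and 0 would disable it
CONFIG proxy.config.http.keep_alive_no_activity_timeout_in INT 120
# Per-transaction inactivity timeout toward the client; was 30
CONFIG proxy.config.http.transaction_no_activity_timeout_in INT 3600
# Absolute cap on a transaction's lifetime; too low can abort long video downloads
CONFIG proxy.config.http.transaction_active_timeout_in INT 3600
```

Large enough that slow video streams survive, but still bounded against abusive or broken clients.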

Curious to hear about your experiences (other than the timeouts) too. Is 
Apache TS working as you expected so far? Any problems, concerns, 
missing features, crashes etc.?

Cheers,

-- Leif


Re: Weird continue records

Posted by Glen Stampoultzis <gs...@iinet.net.au>.
As far as I can tell yes.  It seems that you don't need to present them in 
that format when serializing.  You just need to handle it when decoding 
existing records.

-- Glen
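The decoding rule Glen describes below, where a CONTINUE that follows an OBJ record actually continues the most recent drawing record rather than the OBJ, can be sketched as a single pass over the record stream. The record model here is deliberately simplified and is not POI's actual API:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the decoding rule: remember the last drawing (escher)
// record seen, and fold CONTINUE payloads into it even when an OBJ record
// sits in between. Record types are simplified to strings.
public class ContinueRecordDemo {
    static final String DRAWING = "DRAWING", OBJ = "OBJ", CONTINUE = "CONTINUE";

    static class Rec {
        final String type;
        final List<byte[]> chunks = new ArrayList<>();
        Rec(String type, byte[] data) { this.type = type; chunks.add(data); }
    }

    // Returns the logical records, with each CONTINUE folded into the last
    // drawing record rather than the immediately preceding record.
    static List<Rec> decode(List<Rec> raw) {
        List<Rec> out = new ArrayList<>();
        Rec lastDrawing = null;
        for (Rec r : raw) {
            if (r.type.equals(CONTINUE) && lastDrawing != null) {
                lastDrawing.chunks.addAll(r.chunks); // continues the drawing stream
            } else {
                if (r.type.equals(DRAWING)) lastDrawing = r;
                out.add(r);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        List<Rec> raw = new ArrayList<>();
        raw.add(new Rec(DRAWING, new byte[]{1}));
        raw.add(new Rec(OBJ, new byte[]{2}));
        raw.add(new Rec(CONTINUE, new byte[]{3})); // continues DRAWING, not OBJ
        List<Rec> logical = decode(raw);
        System.out.println(logical.size());               // 2
        System.out.println(logical.get(0).chunks.size()); // 2
    }
}
```

The key point is the lastDrawing state: as Glen notes, a serializer is free to emit the simpler layout, but a decoder has to carry this state across intervening OBJ records.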

At 12:39 AM 19/08/2004, you wrote:
>Just curious, is that X threshold consistent?  40 minute turnaround time 
>is pretty good for weird stuff like this :)
>
>Glen Stampoultzis wrote:
>
>>Okay... sometimes it helps to say things in public.
>>What happens is this:
>>Usually you get drawing records followed by OBJ record or text object 
>>records repeating for as many shapes as you have in your sheet.  This is 
>>complicated by the fact that the escher records actually form one stream 
>>that is split across those drawing records.  Painful but at least sort of 
>>consistent.
>>It seems that the Excel file format people didn't like this consistency 
>>however. They had this great idea that after writing a certain number of 
>>records this way they should change things and start writing continue 
>>records.  Now the thing with continue records is that they were supposed 
>>to be for continuing records that have grown past the max record size and 
>>they immediately follow the record they are continuing.
>>You've probably guessed by now that the way they're being used for 
>>drawing records does not follow this pattern.  After writing X records, 
>>Excel will start writing records in the pattern: OBJ -> CONTINUE -> OBJ 
>>-> CONTINUE etc.  One might logically think that the continue record is 
>>continuing the OBJ record but it is actually continuing the very last 
>>drawing record we ran across.
>>Got to love Excel.
>>Regards,
>>Glen
>>At 11:10 AM 18/08/2004, you wrote:
>>
>>>I'm probably on my own with this but I thought I'd throw it out there 
>>>anyway.
>>>
>>>I've got some really strange behavior I noticed with OBJ records.
>>>When you have an Excel sheet with a really large number of OBJ records 
>>>Excel will sometimes trail it with a continue record.  The really weird 
>>>thing is that it doesn't seem to be a continuation of the OBJ 
>>>record.  Take this real life example:
>>>
>>>
>>>OBJ Record:
>>>
>>>00000000 15 00 12 00 01 00 4E 00 11 60 00 00 00 00 84 BB ......N..`......
>>>00000010 E8 00 00 00 00 00 00 00 00 00                   ..........
>>>
>>>[OBJ]
>>>SUBRECORD: [ftCmo]
>>>     .objectType           = 0x0001 (1 )
>>>     .objectId             = 0x004E (78 )
>>>     .option               = 0x6011 (24593 )
>>>          .locked                   = true
>>>          .printable                = true
>>>          .autofill                 = true
>>>          .autoline                 = true
>>>     .reserved1            = 0x00000000 (0 )
>>>     .reserved2            = 0x00E8BB84 (15252356 )
>>>     .reserved3            = 0x00000000 (0 )
>>>[/ftCmo]
>>>SUBRECORD: [ftEnd]
>>>[/ftEnd]
>>>[/OBJ]
>>>
>>>Followed by the continue block below:
>>>
>>>00000000 0F 00 04 F0 64 00 00 00 42 01 0A F0 08 00 00 00 ....d...B.......
>>>00000010 4F 04 00 00 00 0A 00 00 73 00 0B F0 2A 00 00 00 O.......s...*...
>>>00000020 BF 00 08 00 08 00 44 01 04 00 00 00 7F 01 00 00 ......D.........
>>>00000030 01 00 BF 01 00 00 10 00 C0 01 40 00 00 08 D1 01 ..........@.....
>>>00000040 01 00 00 00 FF 01 10 00 10 00 00 00 10 F0 12 00 ................
>>>00000050 00 00 00 00 04 00 D0 02 10 00 1E 00 05 00 A0 02 ................
>>>00000060 13 00 5A 00 00 00 11 F0 00 00 00 00             ..Z.........
>>>
>>>
>>>As you can see the OBJ record is complete (you can tell by the 
>>>ftEnd).  The continue makes no sense in this context.  Unless it's 
>>>continuing the record before the OBJ for some sick reason... hrrm..
>>>
>>>Regards,
>>>
>>>
>>>Glen Stampoultzis
>>>gstamp@iinet.net.au
>>>http://members.iinet.net.au/~gstamp/glen/
>>>
>>>
>>>---------------------------------------------------------------------
>>>To unsubscribe, e-mail: poi-dev-unsubscribe@jakarta.apache.org
>>>For additional commands, e-mail: poi-dev-help@jakarta.apache.org
>>
>>Glen Stampoultzis
>>gstamp@iinet.net.au
>>http://members.iinet.net.au/~gstamp/glen/
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: poi-dev-unsubscribe@jakarta.apache.org
>For additional commands, e-mail: poi-dev-help@jakarta.apache.org
>


Glen Stampoultzis
gstamp@iinet.net.au
http://members.iinet.net.au/~gstamp/glen/

Re: Weird continue records

Posted by Danny Mui <da...@muibros.com>.
Just curious, is that X threshold consistent?  40 minute turnaround time 
is pretty good for weird stuff like this :)

Glen Stampoultzis wrote:

> 
> Okay... sometimes it helps to say things in public.
> 
> What happens is this:
> 
> Usually you get drawing records followed by OBJ record or text object 
> records repeating for as many shapes as you have in your sheet.  This is 
> complicated by the fact that the escher records actually form one stream 
> that is split across those drawing records.  Painful but at least sort 
> of consistent.
> 
> It seems that the Excel file format people didn't like this consistency 
> however. They had this great idea that after writing a certain number of 
> records this way they should change things and start writing continue 
> records.  Now the thing with continue records is that they were supposed 
> to be for continuing records that have grown past the max record size 
> and they immediately follow the record they are continuing.
> 
> You've probably guessed by now that the way they're being used for 
> drawing records does not follow this pattern.  After writing X records, 
> Excel will start writing records in the pattern: OBJ -> CONTINUE -> OBJ 
> -> CONTINUE etc.  One might logically think that the continue record is 
> continuing the OBJ record but it is actually continuing the very last 
> drawing record we ran across.
> 
> Got to love Excel.
> 
> Regards,
> 
> Glen
> 
> At 11:10 AM 18/08/2004, you wrote:
> 
>> I'm probably on my own with this but I thought I'd throw it out there 
>> anyway.
>>
>> I've got some really strange behavior I noticed with OBJ records.  
>> When you have an Excel sheet with a really large number of OBJ records 
>> Excel will sometimes trail it with a continue record.  The really 
>> weird thing is that it doesn't seem to be a continuation of the OBJ 
>> record.  Take this real life example:
>>
>>
>> OBJ Record:
>>
>> 00000000 15 00 12 00 01 00 4E 00 11 60 00 00 00 00 84 BB ......N..`......
>> 00000010 E8 00 00 00 00 00 00 00 00 00                   ..........
>>
>> [OBJ]
>> SUBRECORD: [ftCmo]
>>     .objectType           = 0x0001 (1 )
>>     .objectId             = 0x004E (78 )
>>     .option               = 0x6011 (24593 )
>>          .locked                   = true
>>          .printable                = true
>>          .autofill                 = true
>>          .autoline                 = true
>>     .reserved1            = 0x00000000 (0 )
>>     .reserved2            = 0x00E8BB84 (15252356 )
>>     .reserved3            = 0x00000000 (0 )
>> [/ftCmo]
>> SUBRECORD: [ftEnd]
>> [/ftEnd]
>> [/OBJ]
>>
>> Followed by the continue block below:
>>
>> 00000000 0F 00 04 F0 64 00 00 00 42 01 0A F0 08 00 00 00 ....d...B.......
>> 00000010 4F 04 00 00 00 0A 00 00 73 00 0B F0 2A 00 00 00 O.......s...*...
>> 00000020 BF 00 08 00 08 00 44 01 04 00 00 00 7F 01 00 00 ......D.........
>> 00000030 01 00 BF 01 00 00 10 00 C0 01 40 00 00 08 D1 01 ..........@.....
>> 00000040 01 00 00 00 FF 01 10 00 10 00 00 00 10 F0 12 00 ................
>> 00000050 00 00 00 00 04 00 D0 02 10 00 1E 00 05 00 A0 02 ................
>> 00000060 13 00 5A 00 00 00 11 F0 00 00 00 00             ..Z.........
>>
>>
>> As you can see the OBJ record is complete (you can tell by the 
>> ftEnd).  The continue makes no sense in this context.  Unless it's 
>> continuing the record before the OBJ for some sick reason... hrrm..
>>
>> Regards,
>>
>>
>> Glen Stampoultzis
>> gstamp@iinet.net.au
>> http://members.iinet.net.au/~gstamp/glen/
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: poi-dev-unsubscribe@jakarta.apache.org
>> For additional commands, e-mail: poi-dev-help@jakarta.apache.org
>>
> 
> 
> Glen Stampoultzis
> gstamp@iinet.net.au
> http://members.iinet.net.au/~gstamp/glen/
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: poi-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: poi-dev-help@jakarta.apache.org


Re: Configuring SSL on default Broker

Posted by Chris Odom <ch...@mediadriver.com>.
Further debugging reveals that the SpringSSLContext is parsed correctly 
from activemq-broker.xml, but during the binding process I found this:

org.apache.activemq.transport.TransportFactory
    public static TransportServer bind(BrokerService brokerService, URI
location) throws IOException {
        TransportFactory tf = findTransportFactory(location);
        if( brokerService!=null && tf instanceof BrokerServiceAware ) {
            ((BrokerServiceAware)tf).setBrokerService(brokerService);
        }
        try {
            if( brokerService!=null ) {
               
SslContext.setCurrentSslContext(brokerService.getSslContext());
            }
            return tf.doBind(location);
        } finally {
            SslContext.setCurrentSslContext(null);
        }
    }

org.apache.activemq.broker.SslContext
    static public void setCurrentSslContext(SslContext bs) {
        current.set(bs);
    }

The TransportFactory calls setCurrentSslContext twice; the second call, 
in the finally block, resets the SslContext to null.
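
Installing a value for the duration of a call and clearing it in a finally block is a standard thread-local scoping idiom, which is what the snippet above does. The minimal sketch below (the names are illustrative, not ActiveMQ's) shows why any code that reads the context only after bind() has returned will see null, matching the symptom described:

```java
// Sketch of the thread-local scoping idiom: install a per-thread context,
// run the work, then clear the context in finally. If the work defers its
// read until after bind() returns, it observes null.
public class ScopedContextDemo {
    static final ThreadLocal<String> CURRENT = new ThreadLocal<>();

    static String readDuring; // what the work sees while the scope is open
    static String readAfter;  // what a deferred reader sees afterwards

    static void bind(String context) {
        try {
            CURRENT.set(context);       // first "setCurrentSslContext" call
            readDuring = CURRENT.get(); // doBind() runs here and can see it
        } finally {
            CURRENT.set(null);          // second call: the scope is torn down
        }
        readAfter = CURRENT.get();      // any deferred read now gets null
    }

    public static void main(String[] args) {
        bind("ssl-context");
        System.out.println(readDuring); // ssl-context
        System.out.println(readAfter);  // null
    }
}
```

So whether this is a bug depends on when the transport actually consumes the context: a read inside doBind() is fine, a lazy read after bind() returns is not.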

On Tue, 24 Apr 2012 09:43:03 -0500, Chris Odom
<ch...@mediadriver.com>
wrote:
> Further debugging of the issue has revealed that the created
> SslSocketConnector's sslContextFactory does not have the keyStore
> created; as in, it's null. I have also noticed that when using either
> configuration below, the keyStorePassword is mucked up as well:
> 
> <!-- SSL context used for both http(s) and ssl transport -->
>         <sslContext>
>             <sslContext keyStore="${karaf.home}/etc/jsse/localhost.ks"
> keyStorePassword="changeit" />
>         </sslContext>
> 
> <!-- SSL context used for both http(s) and ssl transport -->
>         <sslContext>
>             <sslContext
> keyStore="file:${karaf.home}/etc/jsse/localhost.ks"
> keyStorePassword="changeit" />
>         </sslContext>
> 
> With the above listed configurations the keyStorePassword ends up being
> just the letter 't' and not 'changeit';
> 
> I am currently using apache-servicemix-4.4.1-fuse-03-06 and any help in
> this would be deeply appreciated.
> 
> and yes the sslContext element is in A-Z order with in the broker
element.
> 
> Thanks
> Chris O.
> 
> 
> On Mon, 23 Apr 2012 17:30:52 -0500, Chris Odom
> <ch...@mediadriver.com>
> wrote:
>> I am currently trying to set up both an https and an ssl transport connector
>> for the default broker. I am using servicemix, deploying a blueprint version
>> of the activemq-broker.xml, and have followed all the how-tos with no success.
>> Below is an excerpt of my broker.xml file for the sslContext configuration:
>> 
>> 
>> When I start or update ServiceMix, within the console I get prompted with
>> "org.eclipse.jetty.ssl.password : " 
>> 
>> If you attempt to type something in,
>> by the 3rd character it just returns without hitting enter, and it prompts
>> a second time doing the exact same thing, then does not prompt any more.
>> Within the log file I see this after the second prompt occurs: 
>> 
>> 17:21:41,961
>> | WARN | rint Extender: 3 | log | ? ? | 80 - org.eclipse.jetty.util -
>> 7.4.5.fuse20111017 | FAILED
> Krb5AndCertsSslSocketConnector@localhost:8443
>> FAILED: java.lang.IllegalStateException: SSL context is not configured
>> correctly. 
>> 
>> 17:21:41,961 | WARN | rint Extender: 3 | log | ? ? | 80 -
>> org.eclipse.jetty.util - 7.4.5.fuse20111017 | FAILED
>> org.eclipse.jetty.server.Server@2b76fbc2:
> java.lang.IllegalStateException:
>> SSL context is not configured correctly. 
>> 
>> 17:21:41,961 | ERROR | rint
>> Extender: 3 | BrokerService | ? ? | 51 -
> org.apache.activemq.activemq-core
>> - 5.5.1.fuse-03-06 | Failed to start ActiveMQ JMS Message Broker
> (default,
>> null). Reason: java.lang.IllegalStateException: SSL context is not
>> configured correctly. 
>> 
>> java.lang.IllegalStateException: SSL context is not
>> configured correctly. 
>> 
>>  at
>>
>
org.eclipse.jetty.server.ssl.SslSocketConnector.doStart(SslSocketConnector.java:338)
>> 
>> 
>>  at
>>
>
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:58)
>> 
>> 
>>  at org.eclipse.jetty.server.Server.doStart(Server.java:269) 
>> 
>>  at
>>
>
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:58)
>> 
>> 
>>  at
>>
>
org.apache.activemq.transport.http.HttpTransportServer.doStart(HttpTransportServer.java:94)
>> 
>> 
>>  at
>>
>
org.apache.activemq.transport.https.HttpsTransportServer.doStart(HttpsTransportServer.java:71)
>> 
>> 
>>  at
>> org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:54) 
>> 
>>  at
>>
>
org.apache.activemq.broker.TransportConnector.start(TransportConnector.java:250)
>> 
>> 
>>  at
>>
>
org.apache.activemq.broker.BrokerService.startTransportConnector(BrokerService.java:2206)
>> 
>> 
>>  at
>>
>
org.apache.activemq.broker.BrokerService.startAllConnectors(BrokerService.java:2119)
>> 
>> 
>>  at
>> org.apache.activemq.broker.BrokerService.start(BrokerService.java:538) 
>> 
>> 
>> at
>>
>
org.apache.activemq.broker.BrokerService.autoStart(BrokerService.java:482)
>> 
>> 
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method)[:1.6.0_26] 
>> 
>>  at
>>
>
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)[:1.6.0_26]
>> 
>> 
>>  at java.lang.reflect.Method.invoke(Method.java:597)[:1.6.0_26] 
>> 
>>  at
>>
>
org.apache.aries.blueprint.utils.ReflectionUtils.invoke(ReflectionUtils.java:226)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BeanRecipe.invoke(BeanRecipe.java:824)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BeanRecipe.runBeanProcInit(BeanRecipe.java:636)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:724)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:64)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:219)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:147)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:640)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:331)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:227)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)[:1.6.0_26]
>> 
>> 
>>  at java.util.concurrent.FutureTask.run(FutureTask.java:138)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)[:1.6.0_26]
>> 
>> 
>>  at java.lang.Thread.run(Thread.java:662)[:1.6.0_26] 
>> 
>> 17:21:41,966 |
>> INFO | rint Extender: 3 | BrokerService | ? ? | 51 -
>> org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | ActiveMQ Message
>> Broker (default, null) is shutting down 
>> 
>> 17:21:41,967 | INFO | rint
>> Extender: 3 | log | ? ? | 80 - org.eclipse.jetty.util -
> 7.4.5.fuse20111017
>> | stopped o.e.j.s.ServletContextHandler{/,null} 
>> 
>> 17:21:42,019 | INFO |
>> rint Extender: 3 | TransportConnector | ? ? | 51 -
>> org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Connector jetty
>> Stopped 
>> 
>> 17:21:42,019 | INFO | rint Extender: 3 | TransportConnector | ? ?
>> | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Connector
> ssl
>> Stopped 
>> 
>> 17:21:42,019 | INFO | rint Extender: 3 | TransportConnector | ? ?
>> | 51 - org.apache.activemq.activemq-core - 5.5.1.fuse-03-06 | Connector
>> openwire Stopped 
>> 
>> 17:21:42,019 | INFO | rint Extender: 3 |
>> TransportConnector | ? ? | 51 - org.apache.activemq.activemq-core -
>> 5.5.1.fuse-03-06 | Connector stomp Stopped 
>> 
>> 17:21:42,023 | INFO | rint
>> Extender: 3 | KahaDBStore | ? ? | 51 -
org.apache.activemq.activemq-core
> -
>> 5.5.1.fuse-03-06 | Stopping async queue tasks 
>> 
>> 17:21:42,023 | INFO | rint
>> Extender: 3 | KahaDBStore | ? ? | 51 -
org.apache.activemq.activemq-core
> -
>> 5.5.1.fuse-03-06 | Stopping async topic tasks 
>> 
>> 17:21:42,023 | INFO | rint
>> Extender: 3 | KahaDBStore | ? ? | 51 -
org.apache.activemq.activemq-core
> -
>> 5.5.1.fuse-03-06 | Stopped KahaDB 
>> 
>> 17:21:42,318 | INFO | rint Extender: 3
>> | BrokerService | ? ? | 51 - org.apache.activemq.activemq-core -
>> 5.5.1.fuse-03-06 | ActiveMQ JMS Message Broker (default, null) stopped
>> 
>> 
>> 17:21:42,319 | ERROR | rint Extender: 3 | BlueprintContainerImpl | ? ?
|
>> 10 - org.apache.aries.blueprint - 0.3.1 | Unable to start blueprint
>> container for bundle activemq-broker.xml
>> 
>> 
>> org.osgi.service.blueprint.container.ComponentDefinitionException:
> Unable
>> to intialize bean .component-2 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BeanRecipe.runBeanProcInit(BeanRecipe.java:638)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BeanRecipe.internalCreate(BeanRecipe.java:724)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.di.AbstractRecipe.create(AbstractRecipe.java:64)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintRepository.createInstances(BlueprintRepository.java:219)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintRepository.createAll(BlueprintRepository.java:147)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintContainerImpl.instantiateEagerComponents(BlueprintContainerImpl.java:640)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintContainerImpl.doRun(BlueprintContainerImpl.java:331)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BlueprintContainerImpl.run(BlueprintContainerImpl.java:227)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)[:1.6.0_26]
>> 
>> 
>>  at java.util.concurrent.FutureTask.run(FutureTask.java:138)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:206)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)[:1.6.0_26]
>> 
>> 
>>  at java.lang.Thread.run(Thread.java:662)[:1.6.0_26] 
>> 
>> Caused by:
>> java.lang.IllegalStateException: SSL context is not configured
> correctly.
>> 
>> 
>>  at
>>
>
org.eclipse.jetty.server.ssl.SslSocketConnector.doStart(SslSocketConnector.java:338)
>> 
>> 
>>  at
>>
>
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:58)
>> 
>> 
>>  at org.eclipse.jetty.server.Server.doStart(Server.java:269) 
>> 
>>  at
>>
>
org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:58)
>> 
>> 
>>  at
>>
>
org.apache.activemq.transport.http.HttpTransportServer.doStart(HttpTransportServer.java:94)
>> 
>> 
>>  at
>>
>
org.apache.activemq.transport.https.HttpsTransportServer.doStart(HttpsTransportServer.java:71)
>> 
>> 
>>  at
>> org.apache.activemq.util.ServiceSupport.start(ServiceSupport.java:54) 
>> 
>>  at
>>
>
org.apache.activemq.broker.TransportConnector.start(TransportConnector.java:250)
>> 
>> 
>>  at
>>
>
org.apache.activemq.broker.BrokerService.startTransportConnector(BrokerService.java:2206)
>> 
>> 
>>  at
>>
>
org.apache.activemq.broker.BrokerService.startAllConnectors(BrokerService.java:2119)
>> 
>> 
>>  at
>> org.apache.activemq.broker.BrokerService.start(BrokerService.java:538) 
>> 
>> 
>> at
>>
>
org.apache.activemq.broker.BrokerService.autoStart(BrokerService.java:482)
>> 
>> 
>>  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native
>> Method)[:1.6.0_26] 
>> 
>>  at
>>
>
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)[:1.6.0_26]
>> 
>> 
>>  at
>>
>
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)[:1.6.0_26]
>> 
>> 
>>  at java.lang.reflect.Method.invoke(Method.java:597)[:1.6.0_26] 
>> 
>>  at
>>
>
org.apache.aries.blueprint.utils.ReflectionUtils.invoke(ReflectionUtils.java:226)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BeanRecipe.invoke(BeanRecipe.java:824)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  at
>>
>
org.apache.aries.blueprint.container.BeanRecipe.runBeanProcInit(BeanRecipe.java:636)[10:org.apache.aries.blueprint:0.3.1]
>> 
>> 
>>  ... 15 more
>> 
>> Any ideas on why this is happening would be deeply appreciated.

-- 
Thanks,
Chris Odom
512:799-0270

Re: CLI caching, etc

Posted by Vadim Gritsenko <va...@verizon.net>.
Upayavira wrote:

>>>>Before you go further with this... Look at method
>>>>isResponseModified() in [1].
>>>>
>>>>What you need to do is to:
>>>>1. Implement method isResponseModified() for the command line
>>>>   environment.
>>>>2. In the CLI, get the file corresponding to the request URI, and
>>>>   get its last modification time.
>>>>3. Populate the environment with this modification time (this will
>>>>   be similar to the If-Modified-Since date header in http).
>>>>4. Call cocoon. It will skip generation if the response is not
>>>>   modified, and won't even read it from cache.
>>>
>>>Very interesting. So Cocoon can tell me if something has been
>>>modified. Great. 
>>
>>Yes, and it works in http env.
>
>I've implemented something around this, with a cache that seems more or
>less to work. However, when I run org.apache.cocoon.Cocoon.process(), my
>methods that I've implemented on the AbstractCommandLineEnvironment do
>not get called (i.e. isResponseModified and setResponseIsNotModified).
>What do I need to do to get Cocoon to actually call these methods on my
>environment?
>

/me doing some digging...

There is a reference to the isResponseModified in 
AbstractProcessingPipeline.checkLastModified [1]. From what I see, this 
method is intended to work only for readers. Which is unfortunate. What 
do you think - can we extend pipeline implementation to support this for 
event pipelines too? Sylvain / Carsten, opinion? :-)

(I don't have a Cocoon checkout at hand right now, so I can't give better 
advice.)

Vadim

[1] 
http://cvs.apache.org/viewcvs.cgi/cocoon-2.1/src/java/org/apache/cocoon/components/pipeline/AbstractProcessingPipeline.java?rev=1.1&content-type=text/vnd.viewcvs-markup



Re: CLI caching, etc

Posted by Upayavira <uv...@upaya.co.uk>.
> >> Before you go further with this... Look at method
> >> isResponseModified() in [1].
> >>
> >> What you need to do is to:
> >> 1. Implement method isResponseModified() for the command line
> >>    environment.
> >> 2. In the CLI, get the file corresponding to the request URI, and
> >>    get its last modification time.
> >> 3. Populate the environment with this modification time (this will
> >>    be similar to the If-Modified-Since date header in http).
> >> 4. Call cocoon. It will skip generation if the response is not
> >>    modified, and won't even read it from cache.
> >
> > Very interesting. So Cocoon can tell me if something has been
> > modified. Great. 

> Yes, and it works in http env.

I've implemented something around this, with a cache that seems more or less to 
work. However, when I run org.apache.cocoon.Cocoon.process(), my methods that 
I've implemented on the AbstractCommandLineEnvironment do not get called (i.e. 
isResponseModified and setResponseIsNotModified). What do I need to do to get 
Cocoon to actually call these methods on my environment?

> >Once I've got this going, I'll get on with attempting a VFS
> >ModifiableSource (probably once I've had a three week holiday in
> >South Africa!).

> 3 week... Lucky you.

But it'll be three weeks without Cocoon :-(

Regards, Upayavira

Re: CLI caching, etc (was Re: New error handling)

Posted by Vadim Gritsenko <va...@verizon.net>.
Upayavira wrote:

>Vadim,
>
>  
>
>>>>1. Implement setStatus() in AbstractCommandLineEnvironment 
>>>>(implementation is empty right now)
>>>>2. Add getStatus() to the AbstractCommandLineEnvironment
>>>>3. Test getStatus() in the CLI crawling code.
>>>>4. Test how it works and fix the broken link :)
>>>>        
>>>>
>
>Works a treat! Thanks. Although I had to modify the sitemap to give error codes 
>(thanks Jeremy for your recent mail!)
>  
>

Great.
...

>>Before you go further with this... Look at method isResponseModified()
>>in [1].
>>
>>What you need to do is to:
>>1. Implement method isResponseModified() for the command line environment.
>>2. In the CLI, get the file corresponding to the request URI, and get
>>   its last modification time.
>>3. Populate the environment with this modification time (this will be
>>   similar to the If-Modified-Since date header in http).
>>4. Call cocoon. It will skip generation if the response is not modified,
>>   and won't even read it from cache.
>
>Very interesting. So Cocoon can tell me if something has been modified. Great. 
>

Yes, and it works in http env.
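The four numbered steps could be sketched roughly as below. Only the two method names (isResponseModified / setResponseIsNotModified) come from the thread; the surrounding class and fields are illustrative, not the real AbstractCommandLineEnvironment.

```java
// Sketch of the CLI-side If-Modified-Since analogue discussed above.
// Method names mirror the Environment methods mentioned in the thread;
// the class itself is hypothetical.
public class CliEnvironmentSketch {
    private final long ifModifiedSince;   // mtime of the already-generated file
    private boolean responseModified = true;

    // Steps 2 and 3: seed the environment with the destination file's
    // mtime (0 if the file doesn't exist yet, forcing generation).
    public CliEnvironmentSketch(java.io.File destination) {
        this(destination.exists() ? destination.lastModified() : 0L);
    }

    public CliEnvironmentSketch(long ifModifiedSince) {
        this.ifModifiedSince = ifModifiedSince;
    }

    // Step 1: the pipeline asks whether the source is newer than the
    // timestamp we seeded; if not, generation can be skipped entirely.
    public boolean isResponseModified(long sourceLastModified) {
        return sourceLastModified <= 0 || sourceLastModified > ifModifiedSince;
    }

    public void setResponseIsNotModified() {
        responseModified = false;
    }

    public boolean wasResponseGenerated() {
        return responseModified;
    }
}
```

Step 4 is then just calling Cocoon with this environment and letting the pipeline consult isResponseModified().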


>However, if the Bean is able to send pages to various locations, it might not be able to 
>identify when a page was generated without network traffic (e.g. when using FTP).
>

In the case of ftp you can retrieve the timestamp of a file from the remote 
ftp server (which is tricky). You can do an "ls -l", get the timestamps for 
all the files in the directory, and save them in a hash.
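Vadim's "ls -l into a hash" idea could look like the sketch below. Real FTP listings vary wildly between servers; this assumes the common "Mon dd HH:mm" form with a caller-supplied year, so treat it as an illustration rather than a robust parser.

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.HashMap;
import java.util.Locale;
import java.util.Map;

// Parse one "ls -l"-style directory listing into a name -> timestamp map,
// so later per-file timestamp lookups need no further network round trips.
public class FtpListingCache {
    public static Map<String, Long> parse(String listing, int year) {
        Map<String, Long> mtimes = new HashMap<>();
        SimpleDateFormat fmt = new SimpleDateFormat("yyyy MMM d HH:mm", Locale.US);
        for (String line : listing.split("\n")) {
            // e.g. "-rw-r--r-- 1 ftp ftp 4096 May 12 10:17 index.html"
            String[] f = line.trim().split("\\s+");
            if (f.length < 9) continue;        // skip "total ..." and blanks
            try {
                long ts = fmt.parse(year + " " + f[5] + " " + f[6] + " " + f[7]).getTime();
                mtimes.put(f[8], ts);
            } catch (ParseException ignored) {
                // unrecognized date layout: leave this file out of the cache
            }
        }
        return mtimes;
    }
}
```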


>This would be unfortunate, as a large site could involve a lot of network traffic, and 
>the point of this is to avoid that.
>
>I could store locally (in my own hashed up cache) the last modified date for the page 
>and the list of links within the page, each time a page is generated. That way, when I 
>am about to generate a page, I can easily get its timestamp. If I find that I don't need 
>to generate the page, I can use my locally held list of links to follow.
>
>Does this seem reasonable?
>

It does not seem unreasonable, so it should be reasonable :)
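The locally held cache Upayavira describes (last-modified date plus the page's links, recorded at generation time) could be sketched as below. Class and field names are illustrative, not from the Cocoon CLI.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Per page, remember when it was last generated and which links it
// contained, so an unchanged page can be skipped while its links are
// still followed during the crawl.
public class CrawlCache {
    public static final class Entry {
        final long lastGenerated;
        final List<String> links;
        Entry(long lastGenerated, List<String> links) {
            this.lastGenerated = lastGenerated;
            this.links = new ArrayList<>(links);
        }
    }

    private final Map<String, Entry> pages = new HashMap<>();

    public void recordGeneration(String uri, long when, List<String> links) {
        pages.put(uri, new Entry(when, links));
    }

    // A page needs (re)generation if we have never seen it, or the
    // source changed after we last generated it.
    public boolean needsGeneration(String uri, long sourceLastModified) {
        Entry e = pages.get(uri);
        return e == null || sourceLastModified > e.lastGenerated;
    }

    // Links to crawl even when generating the page itself is skipped.
    public List<String> linksOf(String uri) {
        Entry e = pages.get(uri);
        return e == null ? new ArrayList<>() : new ArrayList<>(e.links);
    }
}
```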


>And finally, I have got code working to make the CLI use ModifiableSources rather 
>than Destination objects. 
>

Cool


>Do you think I need to support the Destination interface still 
>(and deprecate it), or can I just delete it entirely?
>

No; delete it entirely, since it was never released. There's no need to 
support never-released stuff.


>Once I've got this going, I'll get on with attempting a VFS ModifiableSource (probably 
>once I've had a three week holiday in South Africa!).
>

3 week... Lucky you.

Vadim


>Thanks again.
>
>Regards, Upayavira
>



Re: How to disable xjc ts plugin?

Posted by Daniel Kulp <dk...@apache.org>.
Well, I think the best way would be to pull the xjc plugins out of the 
cxf-bundle.   They aren't used at runtime for anything.   The cxf-manifest 
would already pick them up if the jars are there.   Thus, if they were 
external jars, the solution would be to just delete the jar.   

Want to log a JIRA?  (Including a patch would be great.  :-)


Dan


On Tue January 5 2010 12:14:08 pm wytas wrote:
> Thank you for your suggestion. It worked, and it's OK for me to modify
> the cxf jar :)
> 
> I'm wondering what the "official" way to avoid this problem would be?
> 

-- 
Daniel Kulp
dkulp@apache.org
http://www.dankulp.com/blog

Re: How to disable xjc ts plugin?

Posted by wytas <wy...@freemail.lt>.
Thank you for your suggestion. It worked, and it's OK for me to modify the
cxf jar :)

I'm wondering what the "official" way to avoid this problem would be?
-- 
View this message in context: http://old.nabble.com/How-to-disable-xjc-ts-plugin--tp26110271p27026665.html
Sent from the cxf-user mailing list archive at Nabble.com.


Re: Graph results in distributed mode

Posted by un...@prometeo.it.
I tried to run the same tests with jmeter 1.8.1 and there was no problem.
So maybe it is a problem only with the nightly builds?

Umberto

Quoting "mstover1@apache.org" <ms...@apache.org>:

> Thanks for the screenshot.  It looks like you're not getting 2 lines, but
> rather a single very disjointed line - which shouldn't be happening either.
> I'm looking into it.
> 
> On 7 May 2003 at 15:20, Umberto Nicoletti wrote:
> 
> > Mike, 
> > sorry for late reply, but I was out of office. 
> > 
> > Attached you can find two screen shots: one with throughput not being
> > displayed and another with throughput displayed. 
> > You can clearly see a green line at the bottom and one climbing up. 
> > While I can explain the bottom one with errors in my clients (they might
> > not be enabled to reach the test server*) I don't understand why there
> > are two lines. 
> > 
> > *This is actually a test I set up just to grab screenshots, so I don't
> > care about results at all. 
> > 
> > Also I noticed that if the last sample is added from one of the servers
> > that cannot reach the web then total throughput is 0 (I might be wrong
> > and probably should investigate the routine that calculates throughput),
> > but if the sample is from the other server then throughput is a more
> > reasonable number such as 300. 
> > 
> > Umberto 
> > 
> > On Mon, 2003-05-05 at 21:09, mstover1@apache.org wrote: 
> > > Can you send me a screengrab of the graph visualizer doing this?  I don't
> > > see how it's possible, and I've never seen it.
> > > 
> > > -Mike
> > > 
> > > On 5 May 2003 at 17:18, unicoletti@prometeo.it wrote:
> > > 
> > > > Hi all,
> > > > I am using JMeter and am quite happy with it (thanks to all the
> > > > apache guys, really!).
> > > > 
> > > > Now, I have some problems in interpreting graph results when in
> > > > distributed mode.
> > > > My laptop serves as a client for 4 jmeter servers. I noticed that
> > > > GraphResults draws 4 green lines. So, is it that throughput is
> > > > calculated separately for each server?
> > > > But is that what one would expect? Honestly I'd expect results to
> > > > be condensed...
> > > > What about the value in the text label below the graph?
> > > > 
> > > > Then why aren't average, median, and samples reported separately too?
> > > > 
> > > > I am using jmeter 20030504 from nightlies.
> > > > 
> > > > Regards,
> > > > Umberto
> > > > 
> > > > 
> > > > 
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
> > > > For additional commands, e-mail: jmeter-user-help@jakarta.apache.org
> > > > 
> > > 
> > > 
> > > 
> > > --
> > > Michael Stover
> > > mstover1@apache.org
> > > Yahoo IM: mstover_ya
> > > ICQ: 152975688
> > > AIM: mstover777
> > -- 
> > Umberto Nicoletti
> > unicoletti@prometeo.it | Tel. +390415701366
> > 
> > "We'll try to make different mistakes this time." - Larry Wall
> > 
> 
> 
> 
> --
> Michael Stover
> mstover1@apache.org
> Yahoo IM: mstover_ya
> ICQ: 152975688
> AIM: mstover777
> 




---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: postponed rename bugs/features concern

Posted by Chris Hecker <ch...@d6.com>.
>Bit of a priority difference here: The important 1.0 features are the
>ones we *share* with CVS, while many (though not all) of the
>differentiators are gravy.

I agree with this to some extent, but I would put forward a few features 
that are the main reasons for switching, or will be, and really, if svn 
came out as just the shared features there'd be no reason to care.  In no 
particular order:

- atomic commits
- real renames
- versioned directories

I would assume other people have different lists, but these are mine.  Hah, 
I just looked at the homepage and hey, they're the first ones mentioned 
after "most cvs features"!

Anyway, I think you guys need to come out of the gate strong on those 
features.  The rename thing is really the only broken one of those right 
now (and diffing across renames if that still doesn't work right).  It 
seems like a bad idea not to fix it before 1.0.  Also, it's not something 
somebody like me can just go write and contribute...it's going to be a 
schema change (presumably), and if you know you're going to need to do 
that, get it done before people are counting on stability from release to 
release like they will after 1.0.

>Agree that true repository renames would be nice, but I still think
>they're Post-1.0 and it's not a crisis for those who use Subversion
>before then.  Renames in those repositories will be represented as
>copies+deletes, and will continue to be so represented after any
>schema switch.  So there's backwards compatibility here.

Except tools written to do anything with svn repositories post-real-rename 
will silently ignore those renames, which seems like it's setting a land 
mine for later developers.  I'd think you'd want to get changes like this 
that can affect the future in now.

>There was a thread here recently, "svn_fs_merge not used" I believe,
>that brought up the rename question.

I'll check it out, thanks.

Chris



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: SAX 2.0, sealing, Tomcat 3.2.3

Posted by "Pier P. Fumagalli" <pi...@betaversion.org>.
Andrew Cooke at andrew@intertrader.com wrote:
>
> I don't want to use 4.0 as it's beta.

Don't worry about it being a beta... It's stable, _VERY_ stable.

    Pier


Re: SAX 2.0, sealing, Tomcat 3.2.3

Posted by Andrew Cooke <an...@intertrader.com>.
At 11:14 AM 7/31/01 -0300, you wrote:
> > At 02:42 PM 7/31/01 +0100, you wrote:
> > [...]
> >>In particular, our code gives the "usual" sealed jar exception unless I
> >>unseal our copy of xalan.jar, at which point I get a NoSuchMethod call
> >>when executing
> >>  SAXParserFactory factory = SAXParserFactory.newInstance()
> >
> > Grrr.  Wrong line.  The error comes from:
> >
> >      myParser = factory.newSAXParser().getXMLReader();
> >
> > Sorry,
> > Andrew
>In any case, with Tomcat 4.0 you would no longer have this problem. You can
>use several parser factories (one in each context) since it keeps a
>different classloader per context. Good luck!
>Bernardo

Thanks, but do you know how to do it with 3.2.3?  If it can't be done, I 
suppose we'll use 4.0, but I'd rather not, because it's beta, that's all...

Andrew


Re: got the license for InstallShield

Posted by Ben Laurie <be...@algroup.co.uk>.
Brian Behlendorf wrote:
> 
> At 06:50 PM 8/8/97 +0100, you wrote:
> >Brian Behlendorf wrote:
> >>
> >> We got a license for InstallShield for Apache NT.  They'll be emailing the
> >> paperwork; I had it made out to "the Apache Group", which is what our code
> >> license is under.  This is InstallShield 5, with "Packaged for the Web",
> >> which means people just download an executable, run it, and it uncompresses
> >> itself and launches into the install process.  Woohoo!
> >
> >Cool. How do we actually get hold of it?
> 
> I'll be getting an email with download instructions and the license.  Since
> it's commercial software, we'll probably have to have a designated "NT
> builder" who is in possession of the license and software at any one point
> in time.  I'll let you all know more when I get it :)

That's a pain. We want dispensation to have more than one install. I
presume that they are protected against abuse by building "the Apache
Group" into the install?

> I certainly expect whatever config files we generate for IS can also be
> distributed in source code form, put under CVS, etc.

I hope so. Anyway, kudos to InstallShield for doing this, and doing it
so fast. We must remember to credit them in the appropriate places.

Cheers,

Ben.

-- 
Ben Laurie                Phone: +44 (181) 994 6435  Email:
ben@algroup.co.uk
Freelance Consultant and  Fax:   +44 (181) 994 6472
Technical Director        URL: http://www.algroup.co.uk/Apache-SSL
A.L. Digital Ltd,         Apache Group member (http://www.apache.org)
London, England.          Apache-SSL author

Re: Disabling chunking on Tomcat 4.1.24

Posted by Bill Barker <wb...@wilshire.com>.
The newer CoyoteConnector doesn't respect the "allowChunking" attribute.
All you can do (as Justin says below) is set the Content-Length header
(via response.setContentLength).  Assuming that your responses aren't in
the megabyte family, this is easy enough to do in a Filter.
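Servlet-API plumbing aside, the buffering trick behind such a Filter amounts to capturing the whole response body before sending it, so a Content-Length header can be emitted and the container has no reason to fall back to chunked transfer encoding. A minimal sketch of that idea, independent of the servlet API (in a real Filter the buffer would live inside an HttpServletResponseWrapper):

```java
import java.io.ByteArrayOutputStream;
import java.nio.charset.StandardCharsets;

// Buffer the full response body first, then render headers with an exact
// Content-Length. The class is illustrative, not a real servlet Filter.
public class BufferedResponse {
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    // The application writes here instead of directly to the socket.
    public void write(byte[] body) {
        buffer.write(body, 0, body.length);
    }

    public int contentLength() {
        return buffer.size();
    }

    // With the full body in hand we can emit Content-Length up front
    // instead of Transfer-Encoding: chunked.
    public byte[] render() {
        String head = "HTTP/1.1 200 OK\r\n"
                + "Content-Length: " + buffer.size() + "\r\n\r\n";
        byte[] h = head.getBytes(StandardCharsets.ISO_8859_1);
        byte[] b = buffer.toByteArray();
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        out.write(h, 0, h.length);
        out.write(b, 0, b.length);
        return out.toByteArray();
    }
}
```

As Justin notes below, the cost is holding the entire response in memory, which is why this is impractical for very large responses.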

"Justin Ruthenbeck" <ju...@nextengine.com> wrote in message
news:4.3.2.20030513102319.00b6fb18@pop.synerdyne.com...
>
> Shiva --
>
> I ran into the same problem (chunked responses sent to a non-standard
> client that couldn't handle them) quite some time ago.  I don't remember
> all the details, but ...
>
> There's an optional attribute in the <Connector> tag of your server.xml
> called "allowChunking".  In Tomcat 4.0.3 (which I was using at the time),
> there was a bug associated with it that forced me to set this to false
> *and* set the Content-Length header in order to avoid chunking.  This meant
> that I had to store the response and send it after all processing was done
> in order to compute the Content-Length value until we could find another
> solution (obviously this isn't practical for large response sizes).
>
> Alternately (or additionally), you can increase the size of the response
> buffer (response.setBufferSize()) such that the entire response fits in the
> buffer.  If this is the case, I seem to recall that the response won't be
> Chunked (since the Tomcat/connector response objects can calculate the
> Content-Length themselves).
>
> Sorry to be wishy-washy on this stuff -- I just don't remember the
> specifics ... hopefully it'll give you a starting point.
>
> Cheers,
> justin
>
>
> At 10:17 PM 5/12/2003, you wrote:
>
> >Hi
> >
> >Is it possible not to perform "chunking" even if the client sets HTTP 1.1?
> >
> >I have an interoperability problem (see problem below) and would like to
> >get ur opinion.
> >
> >Kindly respond directly to me, since Iam not on this list
> >
> >Thanks in advance
> >!shiva
> >
> >----------------- SOAP Interoperability problem -----
> >Iam having the following problem between a VB application using MSSOAP
> >2.0(using high level API) and Axis 1.1RC2 running on Tomcat 4.1.24 (HTTP
> >1.1 coyote connector)
> >
> >The HTTP POST request from MSSOAP sets HTTP 1.1 on request message and
> >Axis/Tomcat
> >responds with chunked data as follows
> >HTTP headers
> >>Transfer-encoding : chunked ...
> >>1b1
> >><?xml ...?>
> >>0
> >>
> >>My guess is that the MSSOAP is not able to handle chunking. When I use
> >>the MSSOAP2.0 SOAP tracer
> >>it can display the request messages well and not the response message.
> >>
> >>I think the problem can be solved if the request's HTTP version is
> >>stepped down to 1.0 or
> >tomcat does not
> >do chunking. Any help on highlighting the solution (if any) would be
> >>greatly appreciated.
> >>
> >>Iam going to try MSSOAP30 and see if the problem goes away.
> >>
> >>Kindly send responses directly to me, since Iam not on the list.
> >>
> >>Thanks in advance
> >>!shiva
> >
> >
> >
> >
> >---------------------------------------------------------------------
> >To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
> >For additional commands, e-mail: tomcat-user-help@jakarta.apache.org
>
>
> ____________________________________
> Justin Ruthenbeck
> Software Engineer, NextEngine Inc.
> justinr - AT - nextengine DOT com
> Confidential
>     See http://www.nextengine.com/confidentiality.php
> ____________________________________




---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org


Re: Disabling chunking on Tomcat 4.1.24

Posted by Justin Ruthenbeck <ju...@nextengine.com>.
Shiva --

I ran into the same problem (chunked responses sent to a non-standard 
client that couldn't handle them) quite some time ago.  I don't remember 
all the details, but ...

There's an optional attribute in the <Connector> tag of your server.xml 
called "allowChunking".  In Tomcat 4.0.3 (which I was using at the time), 
there was a bug associated with it that forced me to set this to false 
*and* set the Content-Length header in order to avoid chunking.  This meant 
that I had to store the response and send it after all processing was done 
in order to compute the Content-Length value until we could find another 
solution (obviously this isn't practical for large response sizes).

Alternately (or additionally), you can increase the size of the response 
buffer (response.setBufferSize()) such that the entire response fits in the 
buffer.  If this is the case, I seem to recall that the response won't be 
Chunked (since the Tomcat/connector response objects can calculate the 
Content-Length themselves).

Sorry to be wishy-washy on this stuff -- I just don't remember the 
specifics ... hopefully it'll give you a starting point.

Cheers,
justin


At 10:17 PM 5/12/2003, you wrote:

>Hi
>
>Is it possible not to perform "chunking" even if the client sets HTTP 1.1?
>
>I have an interoperability problem (see problem below) and would like to 
>get your opinion.
>
>Kindly respond directly to me, since I am not on this list.
>
>Thanks in advance
>!shiva
>
>----------------- SOAP Interoperability problem -----
>I am having the following problem between a VB application using MSSOAP 
>2.0 (using the high level API) and Axis 1.1RC2 running on Tomcat 4.1.24 
>(HTTP 1.1 coyote connector).
>
>The HTTP POST request from MSSOAP sets HTTP 1.1 on the request message 
>and Axis/Tomcat responds with chunked data as follows:
>HTTP headers
>>Transfer-encoding : chunked ...
>>1b1
>><?xml ...?>
>>0
>
>My guess is that MSSOAP is not able to handle chunking. When I use the 
>MSSOAP 2.0 SOAP tracer it can display the request messages well but not 
>the response message.
>
>I think the problem can be solved if the request's HTTP version is 
>stepped down to 1.0 or Tomcat does not do chunking. Any help on 
>highlighting the solution (if any) would be greatly appreciated.
>
>I am going to try MSSOAP30 and see if the problem goes away.
>
>Kindly send responses directly to me, since I am not on the list.
>
>Thanks in advance
>!shiva
>
>
>
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
>For additional commands, e-mail: tomcat-user-help@jakarta.apache.org


____________________________________
Justin Ruthenbeck
Software Engineer, NextEngine Inc.
justinr - AT - nextengine DOT com
Confidential
    See http://www.nextengine.com/confidentiality.php
____________________________________


---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org


Re: OOo and ZipArchive serializer

Posted by Georges Roux <ge...@pacageek.org>.
Thanks, I'll try that approach. It's usable with compression, but I don't 
know why it shouldn't be fully OOo compliant; there is a lot of 
documentation on the SXW file format:

http://xml.openoffice.org/faq.html#4

and http://xml.openoffice.org/faq.html#10

Georges

Upayavira wrote:

>Georges,
>
>  
>
>>Well, I'm not ready to post, because I have some problems with the
>>ZipArchive serializer. In an OpenOffice Writer document (sxw) there are
>>4 files: meta.xml, styles.xml, content.xml, and settings.xml, and a
>>directory META-INF/.
>>
>>meta.xml is not compressed, to allow easy searching and extraction of
>>the meta data. How can I do that? Is there a ZipArchive
>>parameter to fix the compression level to 0% for this file?
>>    
>>
>
>Does meta.xml need to not be compressed? Won't it work with a compressed 
>meta.xml?
>
>I've just looked into the code for the ZipSerializer (which I've never used). It doesn't 
>allow you to specify a compression level. 
>
>Now, I'm assuming you're using Cocoon 2.1. I've attached an untested patch to the 
>ZipSerializer that should make it do what you want by adding a 'method' attribute to 
>the 'entry' node. Have a go at applying the patch (at worst by cutting and pasting the 
>changes, marked by +) into the code for the ZipSerializer and rebuild Cocoon.
>
>Do you think you can handle that?
>
>If it works, I'll apply it to the latest CVS.
>
>Regards, Upayavira
>
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: cocoon-users-unsubscribe@xml.apache.org
>For additional commands, e-mail: cocoon-users-help@xml.apache.org
>
>  
>
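For reference, here is what "compression level 0 for one entry" means at the java.util.zip level, the machinery any such ZipSerializer patch would ultimately drive: a STORED entry must carry its size and CRC up front, while DEFLATED entries need neither. The file names follow the SXW layout described above; the class itself is just an illustration, not the ZipSerializer.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.zip.CRC32;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

// Build a zip with one uncompressed (STORED) entry and one compressed
// (DEFLATED) entry, as an SXW-style archive does for meta.xml.
public class SxwStyleZip {
    public static byte[] build(byte[] meta, byte[] content) {
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (ZipOutputStream zip = new ZipOutputStream(out)) {
                // meta.xml stored uncompressed so tools can read it directly;
                // STORED entries require size and CRC before putNextEntry.
                ZipEntry metaEntry = new ZipEntry("meta.xml");
                metaEntry.setMethod(ZipEntry.STORED);
                metaEntry.setSize(meta.length);
                CRC32 crc = new CRC32();
                crc.update(meta);
                metaEntry.setCrc(crc.getValue());
                zip.putNextEntry(metaEntry);
                zip.write(meta);
                zip.closeEntry();

                // content.xml compressed as usual.
                ZipEntry contentEntry = new ZipEntry("content.xml");
                contentEntry.setMethod(ZipEntry.DEFLATED);
                zip.putNextEntry(contentEntry);
                zip.write(content);
                zip.closeEntry();
            }
            return out.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);   // in-memory I/O; not expected
        }
    }

    // Helper for inspecting the result: entry name -> compression method.
    public static Map<String, Integer> entryMethods(byte[] zipBytes) {
        try (ZipInputStream in = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            Map<String, Integer> methods = new LinkedHashMap<>();
            ZipEntry entry;
            while ((entry = in.getNextEntry()) != null) {
                methods.put(entry.getName(), entry.getMethod());
            }
            return methods;
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```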



---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-users-unsubscribe@xml.apache.org
For additional commands, e-mail: cocoon-users-help@xml.apache.org


Re: Weird continue records

Posted by Rainer Klute <kl...@rainer-klute.de>.
On Wed, 18.08.2004 at 3:51, Glen Stampoultzis wrote:
> You've probably guessed by now that the way they're being used for drawing 
> records does not follow this pattern.  After writing X records Excel will 
> start writing records in the pattern: OBJ -> CONTINUE -> OBJ -> CONTINUE 
> etc.  One might logically think that the continue record is continuing the 
> OBJ record, but it is actually continuing the very last drawing record we 
> ran across.

Weird. At least the good news is that once you know it you can cope with
it. :-)

Best regards
Rainer Klute

                           Rainer Klute IT-Consulting GmbH
  Dipl.-Inform.
  Rainer Klute             E-Mail:  klute@rainer-klute.de
  Körner Grund 24          Telefon: +49 172 2324824
D-44143 Dortmund           Telefax: +49 231 5349423


---------------------------------------------------------------------
To unsubscribe, e-mail: poi-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: poi-dev-help@jakarta.apache.org


Re: [users@httpd] strange log entries

Posted by Henry <he...@ix.netcom.com>.
They are most likely attacks against IIS servers.  I get a lot of those as well - things like:

_vti_bin/owssvr.dll
_vti_bin/shtml.exe
/MSOffice/cltreq.asp

You can just ignore them.  The extra characters are attempts to fool non-patched IIS servers into running local executables and compromising the system. 

-Hank


---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: how to pass a Map to a method

Posted by Randal Walser <ra...@comcast.net>.
At 09:28 AM 1/15/2005 +0900, you wrote:
>I found the problem, but not knowing much about JavaCC, I don't know
>how to fix it.

Hey, way to go.  That sure looks like the problem alright.  I'll point
out your observation in my bugzilla report for Will Glass-Husain.

Thanks again for jumping on this.

Randal


---------------------------------------------------------------------
To unsubscribe, e-mail: velocity-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: velocity-user-help@jakarta.apache.org


Re: how to pass a Map to a method

Posted by Will Glass-Husain <wg...@forio.com>.
If you end up submitting a patch, please be sure to include a unit test for 
the formerly broken functionality.

Thanks,
WILL

----- Original Message ----- 
From: "Shinobu Kawai Yoshida" <sh...@gmail.com>
To: "Velocity Users List" <ve...@jakarta.apache.org>; "Velocity 
Developers List" <ve...@jakarta.apache.org>
Sent: Friday, January 14, 2005 4:28 PM
Subject: Re: how to pass a Map to a method


> Hi Randal,
>
>> There have been no changes to the grammar in over a year (anywhere in
>> the runtime/parser directory, anyway).  Apparently, shoring up the
>> language implementation hasn't been a priority for a while, unless the
>> developers aren't committing language changes to the repository.  I've
>> noticed some other "curiosities" in the language, as well, so maybe
>> I'll dig deeper into it myself when I get a chance.  I'll submit a
>> patch if I come up with anything useful.
>
> I found the problem, but not knowing much about JavaCC, I don't know
> how to fix it.
>
> In Parser.jjt, there are two definitions for "{", LCURLY (in a
> reference) and LEFT_CURLEY (in a map).  Parameter() allows Map() and
> Reference(), both have a definition starting with "{", but in method
> invocation, the state is in "REFMOD2", hence the LEFT_CURLEY is not
> matched.  I think the solution is something using LOOKAHEAD, but don't
> have enough time right now to fiddle around with JavaCC.
>
> Parser.jjt excerpts:
>
> Lines 510-515:
> <DIRECTIVE>
> TOKEN :
> {
>   <LEFT_CURLEY : "{" >
> | <RIGHT_CURLEY : "}" >
> }
>
> Lines 978-1011:
> <REFERENCE,REFMODIFIER,REFMOD2>
> TOKEN :
> {
>   <#ALPHA_CHAR: ["a"-"z", "A"-"Z"] >
> |   <#ALPHANUM_CHAR: [ "a"-"z", "A"-"Z", "0"-"9" ] >
> |   <#IDENTIFIER_CHAR: [ "a"-"z", "A"-"Z", "0"-"9", "-", "_" ] >
> |   <IDENTIFIER:  ( <ALPHA_CHAR> | ["_"]) (<IDENTIFIER_CHAR>)* >
> |   <DOT: "." <ALPHA_CHAR>>
>   {
>       /*
>        * push the alpha char back into the stream so the following 
> identifier
>        * is complete
>        */
>
>       input_stream.backup(1);
>
>       /*
>        * and munge the <DOT> so we just get a . when we have normal text 
> that
>        * looks like a ref.ident
>        */
>
>       matchedToken.image = ".";
>
>       if ( debugPrint )
>           System.out.print("DOT : switching to " + REFMODIFIER);
>       SwitchTo(REFMODIFIER);
>
>   }
> |   <LCURLY: "{">
> |   <RCURLY: "}">
>   {
>       stateStackPop();
>   }
> }
>
> Lines 1406-1415:
> void Map() : {}
> {
>   <LEFT_CURLEY>
>   (
>     LOOKAHEAD(2) Parameter() <COLON> Parameter() (<COMMA> Parameter()
> <COLON> Parameter() )*
>     |
>     [ <WHITESPACE> ]
>    )
>    <RIGHT_CURLEY>
> }
>
> Lines 1468-1485:
> void Reference() : {}
> {
>   /*
>    *  A reference is either ${<FOO>} or  $<FOO>
>    */
>
>     (
>        <IDENTIFIER>
>        (LOOKAHEAD(2) <DOT> (LOOKAHEAD(3) Method() | Identifier() ))*
>     )
>     |
>     (
>        <LCURLY>
>        <IDENTIFIER>
>        (LOOKAHEAD(2) <DOT> (LOOKAHEAD(3) Method() | Identifier() ))*
>        <RCURLY>
>     )
> }
>
> Lines 1442-1456:
> void Parameter() #void: {}
> {
>   [<WHITESPACE>]
>   (
>       StringLiteral()
>       | LOOKAHEAD(  <LBRACKET> [<WHITESPACE>]    ( Reference() |
> NumberLiteral())     [<WHITESPACE>] <DOUBLEDOT> ) IntegerRange()
>       | Map()
>       | ObjectArray()
>       | True()
>       | False()
>       | Reference()
>       | NumberLiteral()
>       )
>   [ <WHITESPACE>]
> }
>
> Best regards,
> -- Shinobu
>
> --
> Shinobu "Kawai" Yoshida <sh...@gmail.com>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: velocity-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: velocity-user-help@jakarta.apache.org
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: velocity-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: velocity-dev-help@jakarta.apache.org


Re: how to pass a Map to a method

Posted by Shinobu Kawai Yoshida <sh...@gmail.com>.
> Here's the original posting:
>    http://mail-archives.apache.org/eyebrowse/ReadMsg?listName=velocity-user@jakarta.apache.org&msgNo=14677
> 
> ## I'll file an issue in Bugzilla tonight (JST) if nobody else does.
> Gotta go to work now.  :(

And here it is:
   http://issues.apache.org/bugzilla/show_bug.cgi?id=33113

Best regards,
-- Shinobu

--
Shinobu "Kawai" Yoshida <sh...@gmail.com>

---------------------------------------------------------------------
To unsubscribe, e-mail: velocity-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: velocity-dev-help@jakarta.apache.org


Re: how to pass a Map to a method

Posted by Shinobu Kawai Yoshida <sh...@gmail.com>.
Hi guys,

Here's the original posting:
    http://mail-archives.apache.org/eyebrowse/ReadMsg?listName=velocity-user@jakarta.apache.org&msgNo=14677

## I'll file an issue in Bugzilla tonight (JST) if nobody else does. 
Gotta go to work now.  :(

Best regards,
-- Shinobu

--
Shinobu "Kawai" Yoshida <sh...@gmail.com>

---------------------------------------------------------------------
To unsubscribe, e-mail: velocity-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: velocity-dev-help@jakarta.apache.org


Re: how to pass a Map to a method

Posted by Shinobu Kawai Yoshida <sh...@gmail.com>.
Hi Randal,

> There have been no changes to the grammar in over a year (anywhere in
> the runtime/parser directory, anyway).  Apparently, shoring up the
> language implementation hasn't been a priority for a while, unless the
> developers aren't committing language changes to the repository.  I've
> noticed some other "curiosities" in the language, as well, so maybe
> I'll dig deeper into it myself when I get a chance.  I'll submit a
> patch if I come up with anything useful.

I found the problem, but not knowing much about JavaCC, I don't know
how to fix it.

In Parser.jjt, there are two definitions for "{", LCURLY (in a
reference) and LEFT_CURLEY (in a map).  Parameter() allows Map() and
Reference(), both have a definition starting with "{", but in method
invocation, the state is in "REFMOD2", hence the LEFT_CURLEY is not
matched.  I think the solution is something using LOOKAHEAD, but don't
have enough time right now to fiddle around with JavaCC.
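
To make the symptom concrete, here is a minimal template sketch (the
$tool reference and its process() method are made up for illustration;
map literals are 1.5 syntax):

```velocity
## assigning a map literal works: at this point "{" is matched
## as LEFT_CURLEY
#set( $map = {"name" : "value"} )
$tool.process($map)

## passing the same literal directly to the method fails to parse:
## inside the argument list the lexer is in REFMOD2, where "{" is
## matched as LCURLY instead
$tool.process({"name" : "value"})
```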

Parser.jjt excerpts:

Lines 510-515:
<DIRECTIVE>
TOKEN :
{
   <LEFT_CURLEY : "{" >
 | <RIGHT_CURLEY : "}" >
}

Lines 978-1011:
<REFERENCE,REFMODIFIER,REFMOD2>
TOKEN :
{
   <#ALPHA_CHAR: ["a"-"z", "A"-"Z"] >
|   <#ALPHANUM_CHAR: [ "a"-"z", "A"-"Z", "0"-"9" ] >
|   <#IDENTIFIER_CHAR: [ "a"-"z", "A"-"Z", "0"-"9", "-", "_" ] >
|   <IDENTIFIER:  ( <ALPHA_CHAR> | ["_"]) (<IDENTIFIER_CHAR>)* >
|   <DOT: "." <ALPHA_CHAR>>
   {
       /*
        * push the alpha char back into the stream so the following identifier
        * is complete
        */

       input_stream.backup(1);

       /*
        * and munge the <DOT> so we just get a . when we have normal text that
        * looks like a ref.ident
        */

       matchedToken.image = ".";

       if ( debugPrint )
           System.out.print("DOT : switching to " + REFMODIFIER);
       SwitchTo(REFMODIFIER);

   }
|   <LCURLY: "{">
|   <RCURLY: "}">
   {
       stateStackPop();
   }
}

Lines 1406-1415:
void Map() : {}
{
   <LEFT_CURLEY>
   (
     LOOKAHEAD(2) Parameter() <COLON> Parameter() (<COMMA> Parameter() <COLON> Parameter() )*
     |
     [ <WHITESPACE> ]
    )
    <RIGHT_CURLEY>
}

Lines 1468-1485:
void Reference() : {}
{
   /*
    *  A reference is either ${<FOO>} or  $<FOO>
    */

     (
        <IDENTIFIER>
        (LOOKAHEAD(2) <DOT> (LOOKAHEAD(3) Method() | Identifier() ))*
     )
     |
     (
        <LCURLY>
        <IDENTIFIER>
        (LOOKAHEAD(2) <DOT> (LOOKAHEAD(3) Method() | Identifier() ))*
        <RCURLY>
     )
}

Lines 1442-1456:
void Parameter() #void: {}
{
   [<WHITESPACE>]
   (
       StringLiteral()
       | LOOKAHEAD(  <LBRACKET> [<WHITESPACE>]    ( Reference() | NumberLiteral())     [<WHITESPACE>] <DOUBLEDOT> ) IntegerRange()
       | Map()
       | ObjectArray()
       | True()
       | False()
       | Reference()
       | NumberLiteral()
       )
   [ <WHITESPACE>]
}

Best regards,
-- Shinobu

--
Shinobu "Kawai" Yoshida <sh...@gmail.com>

---------------------------------------------------------------------
To unsubscribe, e-mail: velocity-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: velocity-dev-help@jakarta.apache.org


Re: how to pass a Map to a method

Posted by Randal Walser <ra...@comcast.net>.
At 10:46 PM 1/14/2005 +0900, you wrote:
>I'd say it should.  Maps are features of the upcoming 1.5, which is
>still under development.  Maybe it's a bug, maybe it's simply not
>implemented yet.  You could file a bugzilla issue so the developers
>get reminded.  Better yet, you can submit a patch to give the
>suggested behaviour!  ;)

There have been no changes to the grammar in over a year (anywhere in
the runtime/parser directory, anyway).  Apparently, shoring up the
language implementation hasn't been a priority for a while, unless the
developers aren't committing language changes to the repository.  I've
noticed some other "curiosities" in the language, as well, so maybe
I'll dig deeper into it myself when I get a chance.  I'll submit a
patch if I come up with anything useful.

Thanks,

Randal


---------------------------------------------------------------------
To unsubscribe, e-mail: velocity-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: velocity-user-help@jakarta.apache.org


RE: sa-update problem after upgrading from Plesk Spamassassin 3.0.4 to SA 3.1.7....

Posted by Florent Gilain <fl...@direct-energie.com>.
Thanks a lot, it now works.

Florent

-----Original Message-----
From: Theo Van Dinter [mailto:felicity@apache.org]
Sent: Tuesday, January 23, 2007 01:27
To: users@spamassassin.apache.org
Subject: Re: sa-update problem after upgrading from Plesk Spamassassin 3.0.4
to SA 3.1.7....

On Tue, Jan 23, 2007 at 01:16:33AM +0100, Florent Gilain wrote:
> SpamAssassin seems to work, but a few tools do not (sa-update for example).
> 
> [root@mx2 spamassassin]# sa-update
> 
> Can't locate Archive/Tar.pm in @INC (@INC contains:

You need to install the modules listed in the INSTALL doc as required for
sa-update.
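
A quick way to see whether the needed modules are present (the two
module names below come from the error above and from what sa-update
typically lacks; the INSTALL doc has the authoritative list):

```shell
#!/bin/sh
# Probe for Perl modules sa-update needs; prints "<module>=OK" or
# "<module>=MISSING" for each. The module list is illustrative.
status=""
for mod in Archive::Tar IO::Zlib; do
    if perl -M"$mod" -e 1 2>/dev/null; then
        status="$status $mod=OK"
    else
        status="$status $mod=MISSING"
    fi
done
echo "$status"
```

Any module reported MISSING can then be installed from CPAN, e.g.
perl -MCPAN -e 'install Archive::Tar'.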

--
Randomly Selected Tagline:
"Ever notice when a house burns down, the only thing left is the fireplace
 and the chimney?"               - Bob Lazarus


Re: update hook per project (or directory)

Posted by Ryan Schmidt <su...@ryandesign.com>.
On Mar 17, 2007, at 07:25, Jan Hendrik wrote:

>> Anything you do just in the post-commit hook, or any script the post-
>> commit hook calls, or any script called by such a script, and so
>> forth, is fine.
>>
>> However, if you fork off a new process (create a thread, whatever),
>> then that is a separate process and the post-commit hook itself can
>> end before the forked process does, and that's when problems as
>> described above can start.
>>
>> So, by (Unix) example (since I don't know Windows):
>>
>> If my post-commit hook is...
>>
>>
>> #!/bin/sh
>> REPOS="$1"
>> REV="$2"
>> /path/to/some-other-script.sh "$REPOS" "$REV"
>>
>>
>> ....then everything is fine, because post-commit will wait for some-
>> other-script.sh to finish before it finishes.
>>
>> However, if my post-commit hook is...
>>
>>
>> #!/bin/sh
>> REPOS="$1"
>> REV="$2"
>> /path/to/some-other-script.sh "$REPOS" "$REV" >/dev/null 2>/dev/null &
>>
>>
>> ....then some-other-script.sh has been forked off into its own
>> process, and post-commit ends immediately, before some-other-
>> script.sh is done running, which can cause the possible problems as
>> mentioned above.
>
> I am not very versed with pipes, but if the output/result/whatever of
> a script/program is directed elsewhere (>/dev/null ...) then this is a
> fork-off or new thread and the calling script would not wait for  
> stuff?

[snip]

It's not the redirecting of stdout and stderr; it's the "&" character  
at the end that causes a new process to be forked. However,  
Subversion has recently acquired a new "feature" whereby if you do  
not also redirect stdout and stderr somewhere, Subversion will still  
wait for that forked process to end before returning from the hook  
script.
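
The difference can be sketched with a self-contained script (the
slow_task function stands in for some-other-script.sh):

```shell
#!/bin/sh
# Sketch: a backgrounded job with stdout/stderr redirected leaves the
# parent (the hook) free to return immediately; nothing holds the
# hook's output pipes open.
MARKER="$(mktemp)"

slow_task() {
    sleep 1
    echo finished >> "$MARKER"
}

slow_task >/dev/null 2>&1 &     # forked: the parent does not wait

if [ -s "$MARKER" ]; then
    early="task already done"
else
    early="hook returned before task finished"
fi
echo "$early"

wait                            # demo only; a real hook would just exit
grep -q finished "$MARKER" && late="task eventually completed"
echo "$late"
rm -f "$MARKER"
```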


-- 

To reply to the mailing list, please use your mailer's Reply To All  
function


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Timestamp Frustrations

Posted by Roel Harbers <ro...@roelharbers.com>.
trlists@clayst.com wrote:
>     - I manage the transfer of all kinds of other files between
>     machines with a single script that uses timestamps.  This approach
>     would require using svn for all the files under version control, so
>     it doubles the complexity of updating.

If the files are important enough to transfer between machines, why not 
just add them to version control?

>     - Using svn update requires a commit on one machine before
>     updating on the other.  I switch back and forth between machines
>     sometimes 2 or 3 times a day, and often I'm not ready to commit
>     the work when I just happen to need to switch machines.  IOW the
>     version control cycle and the between-machines update cycle are
>     poorly matched. . 

So your own script just copies half-completed changes between machines? 
If this is a workable situation for you, I don't see any problem with 
committing these half-completed changes to svn.

> 
>     - Using svn handles only the files under version control.  How do
>     I also handle unversioned files in the directories that are under
>     version control?

Again, if it's important that they are present on these other machines, 
why not just add them to svn?

>     - There are other things I do with timestamps -- for example
>     understanding if two files were changed at about the same time or
>     not, looking at which files I need to upload to deploy the changes
>     to my live site, etc.  The way svn manages the timestamps makes
>     this difficult. 

You could use the svn log or svn diff output to see what changed. You 
could also use a scripted svn export to keep the site up to date, 
although that may not be a good idea when committing unfinished changes.

> So it is a lot more than just saying "use svn update".  It could be 
> done, but it adds a lot of complexity to what is currently a relatively 
> simple process.
> 
> Any other ideas? :-)

AFAICT, the complexity stems from the combination of your script and 
svn. I'd try to get rid of the script altogether. Basically, what you 
are doing now is using *two* version control systems at the same time, 
svn and your own script, which means, indeed, a lot of unneeded complexity.

Regards,

Roel Harbers


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: [OT]Delete/Undelete modified working copy

Posted by Roel Harbers <ro...@roelharbers.nl>.
Jan Hendrik wrote:

> Concerning Re: Delete/Undelete modified workin
> Jeremy Pereira wrote on 28 May 2004, 12:34, at least in part:
>>
>>I can't figure out why you find recycle bin management a problem.  To
>>empty it, right click and select "empty" from the context menu.  How
>>hard is that?  It has a setting so that it will only use a certain
>>percentage of the drive (default 10%).  I have never, worried about
>>it. 
> 
> 
> It's hard enough to have to minimize or close all apps to get to the 
> desktop for the bin.

[Windows key]-M

>In the filemanager it's pretty useless with 
> names like [X-123-DSA-etc...] (W2K SP2). One has to guess and 
> review everything in a file viewer to find the right stuff.

Try the "Recycled" dir, or the "Recycle Bin" at the bottom, not the 
hidden RECYCLER directory (which is basically the "backend" for the 
Recycle bin)

>>  Stuff goes in the bin and then goes away automatically when the
>>  newer 
>>stuff in it equals 10% of the drive.
> 
> 
> ??  You're joking, aren't you?  Haven't seen much of automatically, 
> at least not when it was introduced in W95 - Can't delete 'cause 
> the bin is full or so ...

1995 is *9* years ago. In 2k and XP, it works.

>>>While there are lots of undelete tools there are none that could
>>>sensibly empty the bin.
>>
>>There is right click on the bin and select empty - sounds extremely
>>sensible to me and also size gets managed automatically, also pretty
>>sensible.
> 
> 
> This deletes *anything* in the bin, doesn't it?  Can't see anything 
> sensible in this.  IMHO sensible is that stuff is selected.  The either 
> all or nothing approach is just like keep all or delete right away 
> from the start.

You can just browse the Recycle Bin and delete or restore any files you 
like.

I'm not trying to defend the "svn should use the recycle bin" position 
(it's a command line tool, it shouldn't use the recycle bin, just like 
del doesn't use it), I'm just pointing out some misconceptions about the 
recycle bin.

Regards,

Roel Harbers


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Delete/Undelete modified working copy

Posted by Jan Hendrik <ja...@bigfoot.com>.
Concerning Re: Delete/Undelete modified workin
Jeremy Pereira wrote on 28 May 2004, 12:34, at least in part:

> On May 28, 2004, at 12:07, Jan Hendrik wrote:
> 
> > Concerning Re: Delete/Undelete modified workin
> > Brian Mathis wrote on 27 May 2004, 12:28, at least in part:
> >>
> >> Jan Hendrik wrote:
> >
> > Sorry, but beside the point.  To my recycle bin setting SVN should
> > fill the bin.  It doesn't.  Just as John wrote.  So any change of
> > this behaviour would most likely result in SVN still ignoring my (or
> > your or anyone's) setting of the bin,
> 
> No.  Windows GUI tools should be programmed to respect the windows

You say it: GUI tools.  SVN is not GUI.

> > but simply filling it up.  And make
> > me dealing with deleted stuff twice, no matter of my settings.
> 
> I can't figure out why you find recycle bin management a problem.  To
> empty it, right click and select "empty" from the context menu.  How
> hard is that?  It has a setting so that it will only use a certain
> percentage of the drive (default 10%).  I have never, worried about
> it. 

It's hard enough to have to minimize or close all apps to get to the 
desktop for the bin.  In the filemanager it's pretty useless with 
names like [X-123-DSA-etc...] (W2K SP2). One has to guess and 
review everything in a file viewer to find the right stuff.

>   Stuff goes in the bin and then goes away automatically when the
>   newer 
> stuff in it equals 10% of the drive.

??  You're joking, aren't you?  Haven't seen much of automatically, 
at least not when it was introduced in W95 - Can't delete 'cause 
the bin is full or so ...

> > While there are lots of undelete tools there are none that could
> > sensibly empty the bin.
> 
> There is right click on the bin and select empty - sounds extremely
> sensible to me and also size gets managed automatically, also pretty
> sensible.

This deletes *anything* in the bin, doesn't it?  Can't see anything 
sensible in this.  IMHO sensible is that stuff is selected.  The either 
all or nothing approach is just like keep all or delete right away 
from the start.

> > So for providing for the accidental delete
> > you would rather force anyone to put in additional work for
> > reviewing the bin.
> 
> This is BS.  You don't need to review the bin.
> >
> > BTW as John correctly mentioned commandline behaviour I have
> > heard of tools that intercept CL deletes and put them into the bin.
> > Maybe this would help you.
> 
> OK.  All command line tools on Windows, Unix or Mac OS X are the same.
>  Correct consistent behaviour is arguably that the file should
> disappear altogether for all of these platforms.  I might as well
> complain that rm doesn't put stuff in the trash on my Mac.
> 
> TortoiseSVN as a GUI tool and one that claims to integrate with
> Windows Explorer should respect recycle bin settings.

It's an extension using SVN under the hood, as far as I know and 
understand.  So it cannot do what SVN does not provide for.

Don't think this is on topic anymore though, so from my side it's 
the last posting on this.

Have a fine Whitsun weekend!

Jan Hendrik

---------------------------------------
Freedom quote:

     The price of freedom is eternal vigilance.
                --  Thomas Jefferson


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Delete/Undelete modified working copy

Posted by Jeremy Pereira <je...@ntlworld.com>.
On May 28, 2004, at 12:07, Jan Hendrik wrote:

> Concerning Re: Delete/Undelete modified workin
> Brian Mathis wrote on 27 May 2004, 12:28, at least in part:
>
>>
>>
>> Jan Hendrik wrote:
>>
>
> Sorry, but beside the point.  To my recycle bin setting SVN should
> fill the bin.  It doesn't.  Just as John wrote.  So any change of this
> behaviour would most likely result in SVN still ignoring my (or your
> or anyone's) setting of the bin,

No.  Windows GUI tools should be programmed to respect the windows 
settings of the user.  If the user checks the "don't use recycle bin" 
option, all apps that pretend to be Windows apps should obey that.

> but simply filling it up.  And make
> me dealing with deleted stuff twice, no matter of my settings.

I can't figure out why you find recycle bin management a problem.  To 
empty it, right click and select "empty" from the context menu.  How 
hard is that?  It has a setting so that it will only use a certain 
percentage of the drive (default 10%).  I have never worried about it. 
  Stuff goes in the bin and then goes away automatically when the newer 
stuff in it equals 10% of the drive.

>
> While there are lots of undelete tools there are none that could
> sensibly empty the bin.

There is right click on the bin and select empty - sounds extremely 
sensible to me and also size gets managed automatically, also pretty 
sensible.

> So for providing for the accidental delete
> you would rather force anyone to put in additional work for reviewing
> the bin.

This is BS.  You don't need to review the bin.
>
> BTW as John correctly mentioned commandline behaviour I have
> heard of tools that intercept CL deletes and put them into the bin.
> Maybe this would help you.

OK.  All command line tools on Windows, Unix or Mac OS X are the same.  
Correct consistent behaviour is arguably that the file should disappear 
altogether for all of these platforms.  I might as well complain that 
rm doesn't put stuff in the trash on my Mac.

TortoiseSVN as a GUI tool and one that claims to integrate with Windows 
Explorer should respect recycle bin settings.


>
> Jan Hendrik
>
>
> ---------------------------------------
> Freedom quote:
>
>      If ye love wealth better than liberty,
>      the tranquility of servitude
>      better than the animating contest of freedom,
>      go home from us in peace.
>      We ask not your counsels or arms.
>      Crouch down and lick the hands which feed you.
>      May your chains set lightly upon you,
>      and may posterity forget that ye were our countrymen.
>                 --  Sam Adams
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
> For additional commands, e-mail: users-help@subversion.tigris.org
>
>
>
--
Jeremy Pereira                             Tel: +44 (0)1252 401035
Senior Consultant                       Mobile: +44 (0)7884 265457
Axcelia Ltd                                Fax: +44 (0)1252 336934
http://www.axcelia.com           mailto:jeremy.pereira@axcelia.com


______________________________________________________________________
This email has been scanned by the MessageLabs Email Security System.
For more information please visit http://www.messagelabs.com/email 
______________________________________________________________________

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Integrating Tomcat 5 and Apache 2

Posted by Stewart Walker <sw...@caspercollege.edu>.
What about this section below in catalina.out

INFO: Starting Coyote HTTP/1.1 on http-8080
Sep 28, 2004 7:55:24 PM org.apache.jk.server.JkMain start
********** yik !!
INFO: APR not loaded, disabling jni components: java.io.IOException:
java.lang.UnsatisfiedLinkError: /usr/lib/httpd/modules/jkjni.so:
/usr/lib/httpd/modules/jkjni.so: undefined symbol: apr_md5_final
********************8
Sep 28, 2004 7:55:24 PM org.apache.jk.common.ChannelSocket init
INFO: JK2: ajp13 listening on /0.0.0.0:8009
Sep 28, 2004 7:55:24 PM org.apache.jk.server.JkMain start
INFO: Jk running ID=0 time=1/50 
config=/usr/java/tomcat-5.0.27/conf/jk2.properties
Sep 28, 2004 7:55:24 PM org.apache.catalina.startup.Catalina start
INFO: Server startup in 3409 ms
Sep 28, 2004 7:57:50 PM org.apache.coyote.http11.Http11Protocol pause

do I have a problem with the libs?

There is  localhost_log.2004-09-28.txt in the logs directory along with
catalina.out.

[root@register logs]# cat *.txt
2004-09-28 19:55:24
StandardContext[/balancer]org.apache.webapp.balancer.BalancerFilter:
init(): ruleChain: [org.apache.webapp.balancer.RuleChain:
[org.apache.webapp.balancer.rules.URLStringMatchRule: Target string: News
/ Redirect URL: http://www.cnn.com],
[org.apache.webapp.balancer.rules.RequestParameterRule: Target param name:
paramName / Target param value: paramValue / Redirect URL:
http://www.yahoo.com],
[org.apache.webapp.balancer.rules.AcceptEverythingRule: Redirect URL:
http://jakarta.apache.org]]
2004-09-28 19:55:24 StandardContext[/jsp-examples]ContextListener:
contextInitialized()
2004-09-28 19:55:24 StandardContext[/jsp-examples]SessionListener:
contextInitialized()
2004-09-28 19:55:24 StandardContext[/servlets-examples]ContextListener:
contextInitialized()
2004-09-28 19:55:24 StandardContext[/servlets-examples]SessionListener:
contextInitialized()
2004-09-28 19:56:24 StandardContext[/jsp-examples]SessionListener:
sessionDestroyed('8F92E42A3C2D558F075965E4A85372CC')
2004-09-28 19:57:52 StandardContext[/servlets-examples]SessionListener:
contextDestroyed()
2004-09-28 19:57:52 StandardContext[/servlets-examples]ContextListener:
contextDestroyed()
2004-09-28 19:57:52 StandardContext[/jsp-examples]SessionListener:
contextDestroyed()
2004-09-28 19:57:52 StandardContext[/jsp-examples]ContextListener:
contextDestroyed()

> Stewart,
>
> I think it will be in catalina.2004-09-28.log by default. You'll need to
> try the request again as this valve dumps the request details.
>
> PJ
>
> On Wed, 2004-09-29 at 09:45, Stewart Walker wrote:
>> Here is a snip of the catalina.out after enabling
>> RequestDumperValve
>>
>> catalina.out
>>
>> Sep 28, 2004 5:30:59 PM
>> org.apache.catalina.core.StandardHostDeployer install
>> INFO: Installing web application at context path /webdav from URL
>> file:/usr/java/tomcat-5.0.27/webapps/webdav
>> Sep 28, 2004 5:30:59 PM org.apache.coyote.http11.Http11Protocol
>> start
>> INFO: Starting Coyote HTTP/1.1 on http-8080
>> Sep 28, 2004 5:30:59 PM org.apache.jk.server.JkMain start
>> INFO: APR not loaded, disabling jni components:
>> java.io.IOException: java.lang.UnsatisfiedLinkError:
>> /usr/lib/httpd/modules/jkjni.so: /usr/lib/httpd/modules/jkjni.so:
>> undefined symbol: apr_md5_final
>> Sep 28, 2004 5:30:59 PM org.apache.jk.common.ChannelSocket
>> init
>> INFO: JK2: ajp13 listening on /0.0.0.0:8009
>> Sep 28, 2004 5:30:59 PM org.apache.jk.server.JkMain start
>> INFO: Jk running ID=0 time=1/50  config=/usr/java/tomcat-5.0.27/conf/jk2.properties
>> Sep 28, 2004 5:30:59 PM org.apache.catalina.startup.Catalina start
>> INFO: Server startup in 3464 ms
>> Sep 28, 2004 5:32:22 PM org.apache.coyote.http11.Http11Protocol
>> pause
>> INFO: Pausing Coyote HTTP/1.1 on http-8080
>> Sep 28, 2004 5:32:23 PM
>> org.apache.catalina.core.StandardService stop
>> INFO: Stopping service Catalina
>> Sep 28, 2004 5:32:23 PM
>> org.apache.catalina.core.StandardHostDeployer remove
>> INFO: Removing web application at context path /admin
>> Sep 28, 2004 5:32:23 PM org.apache.catalina.logger.LoggerBase
>> stop
>> INFO: unregistering logger
>> Catalina:type=Logger,path=/admin,host=localhost
>>
>>
>> On 29 Sep 2004 at 9:16, Peter Johnson wrote:
>>
>> > Stewart,
>> >
>> > Try enabling the RequestDumperValve in server.xml. I think you'll find
>> > it has something to do with the difference between
>> > "com.datatel.server.servlets.webadvisor.WebAdvisor" and
>> > "datatel/openweb" ... well that is my first thought anyway.
>> >
>> > PJ
>> >
>> > On Wed, 2004-09-29 at 08:28, Stewart Walker wrote:
>> > > Redhat Linux Enterprise
>> > > httpd-2.0.46-40.ent
>> > > j2sdk1.4.2_05
>> > > tomcat-5.0.27
>> > > jakarta-tomcat-connectors-jk2-2.0.4-src
>> > >
>> > > For the record.. If I run the servlets
>> > >
>> > > http://server.edu:8080/servlet/com.datatel.server.servlets.webadvisor.WebAdvisor?ACTION=Login
>> > > they work fine.
>> > >
>> > >
>> > > Going over the
>> > > Chapter 8. Integrating Tomcat 5 and Apache 2 at
>> > > http://cymulacrum.net/writings/tomcat5/c875.html
>> > > Everything (./configure, make & ldd) went fine.
>> > > Found that the $CATALINA/logs/jk2.shm and jk2.socket
>> > > files for some reason aren't being created when Tomcat starts.
>> > >
>> > > Tomcat isn't complaining about anything as far as I can tell.
>> > >
>> > > 2004-09-28 15:53:34 StandardContext[]WebAdvisor: Initializing
>> > > WebAdvisorContext
>> > > 2004-09-28 15:53:34 StandardContext[]WebAdvisor: No cache
>> > > found, creating new session cache.
>> > > 2004-09-28 15:57:09 StandardContext[/servlets-
>> > > examples]InvokerFilter(ApplicationFilterConfig[name=Path Mapped
>> > > Filter, filterClass=filters.ExampleFilter]): 4 milliseconds
>> > >
>> > > Tomcat starts and stops and serves the pages when directed as
>> > > above.
>> > >
>> > > Went back thru
>> > > Chapter 8. Integrating Tomcat 5 and Apache 2
>> > > Appendix A. mod_jk2 404 Error Problem
>> > > Appendix C. Building mod_jk2 on Red Hat Enterprise Linux 3
>> > > (RHEL)
>> > > double checked everything.
>> > >
>> > > Went ahead and started httpd and tried the uri setting
>> > >
>> > > # Uri mapping for datatel
>> > > [uri:/datatel/openweb/*]
>> > >
>> > > in workers2.properties and
>> > > got the 404 error.
>> > >
>> > > Commented out the uri settings there and put
>> > >
>> > > <Location "/datatel/openweb/*">
>> > > JkUriSet worker ajp13
>> > > </Location>
>> > >
>> > > in etc/httpd/conf/httpd.conf
>> > >
>> > > Gave it a shot and received
>> > >
>> > > 500 Internal Server Error
>> > > (httpd error log logged )
>> > > [Tue Sep 28 15:45:36 2004] [error] uriEnv.init() map to invalid
>> > > worker /datatel/openweb/*-0 ajp13 and [Tue Sep 28 15:57:32 2004]
>> > > [error] mod_jk2.handle() No worker for /datatel/openweb/index.html
>> > > When I tried index.html
>> > >
>> > > I'm almost there and was hoping someone would have an idea on what is
>> > > going on.
>> > >
>> > > Thanks
>> > >
>> > >                  \\|//
>> > >               -(@ @)-
>> > > ===oOO==(_)==OOo======================
>> > >
>> > > Stewart Walker
>> > > swalker@caspercollege.edu
>> > >
>> >
>> >
>> > ---------------------------------------------------------------------
>> > To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org For
>> > additional commands, e-mail: tomcat-user-help@jakarta.apache.org
>> >
>>
>>
>>                  \\|//
>>               -(@ @)-
>> ===oOO==(_)==OOo======================
>>
>> Stewart Walker
>> swalker@caspercollege.edu
>>
>>
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
>> For additional commands, e-mail: tomcat-user-help@jakarta.apache.org
>>
>
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: tomcat-user-help@jakarta.apache.org
>


-- 
       \\|//
      -(@ @)-
===oOO==(_)==OOo===============
swalker@caspercollege.edu

---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org


Re: Unix missing fd 0..2, Win32 service missing stdin/out/err handles

Posted by "William A. Rowe, Jr." <wr...@rowe-clan.net>.
At 08:18 PM 4/13/2002, Jon Travis wrote:
>On Sat, Apr 13, 2002 at 01:20:25PM -0500, William A. Rowe, Jr. wrote:
> > Because third party libraries have a nasty habit of dropping messages out
> > to stderr or stdout... and will sometimes even poll stdin (think 
> passphrases
> > or other bits in encryption libraries, etc) ... it is rather dangerous for
> > -our- applications to ever use fd 0..2.  Sure, you can chalk it up to a 
> bug in
> > the caller, but imagine if 'by chance' we open up an sdbm as fd 2.  Another
> > library prints something to stderr and bang ... database is corrupted.

> > So, on the unix side, within apr_app_initialize, I suggest calling
> > fopen("/dev/null", ) until it returns an fd >2.  On the Win32 side, I 
> suggest
> > calling GetStandardHandle() and filling in any missing stdhandles with
> > the appropriate FILE*'s fd's handle after opening 'NUL' in the clib, so all
> > three bases are covered.  If we end up with an fd >2, then we immediately
> > close that last /dev/null file and go on.
> >
> > Does this make good sense?
>
>Not particularly.
>
>The operating system pre-allocates those fd's (0..2 for Unix) -- why would
>opening an SDBM ever return any of those file descriptors?  The only way
>would be if the consumer closed those handles beforehand.  If the user
>does something like that, their program is broken -- we shouldn't try
>to work around that.

That's the sort of feedback I was looking for.  Thanks.  Yes - if it succeeds
in opening fd 0..2 then the caller fooed up.  And if we don't care to protect
against that case, I can go along with that.  However...

> > On the Win32 side, same goes for FILE *'s stdin, stderr and stdout, for
> > the low level 'fd's 0..2 (not really fd's as unix knows them, but the 
> clib's
> > table of Win32 handles), and the Win32 standard handles.  Win32 services
> > have -no- STD handles, even when they are command line apps.

On Unix, that's the operating system's responsibility - but Win32 services are
started with no standard handles at all.  So in the Win32 case, I see this fix
as required.

Bill



Re: [CLI] Breaking CocoonBean Interface

Posted by Vadim Gritsenko <va...@verizon.net>.
Upayavira wrote:

>>>It is only mentioned along with the <dest-dir> node in the cli.xconf.
>>>If I removed that option in the xconf file, setDestDir would go
>>>easily.
>>
>>No; you don't need to remove the option, and you don't need to change
>>config file at all. Do you want me to show you how? :)
>
>Go for it!

Done. Please verify that all is all right. "build docs" still works 
which is a good sign ;-)

Vadim



Re: [CLI] Breaking CocoonBean Interface

Posted by Upayavira <uv...@upaya.co.uk>.
> >It is only mentioned along with the <dest-dir> node in the cli.xconf.
> >If I removed that option in the xconf file, setDestDir would go
> >easily.
> >
> 
> No; you don't need to remove the option, and you don't need to change
> config file at all. Do you want me to show you how? :)

Go for it!

Upayavira

Re: [CLI] Breaking CocoonBean Interface

Posted by Vadim Gritsenko <va...@verizon.net>.
Upayavira wrote:

>Vadim,
>
>>>>>It is particularly the removal of the Destination interface that
>>>>>breaks the interface. However, removing the setDestDir will
>>>>>certainly make a cleaner interface. I'll do that.
>
>>>I didn't remove it in the end, because it would have taken quite a
>>>bit of plumbing to work around its non-presence (in order to at least
>>>keep the CLI interface the same). 
>
>>Hm... I found only one place to be changed in Main.java. Line 481,
>>need to pass destDir. Otherwise, there is no major changes in CLI.
>
>It is only mentioned along with the <dest-dir> node in the cli.xconf. If 
>I removed that option in the xconf file, setDestDir would go easily.
>

No; you don't need to remove the option, and you don't need to change 
config file at all. Do you want me to show you how? :)

Vadim



Re: [CLI] Breaking CocoonBean Interface

Posted by Upayavira <uv...@upaya.co.uk>.
Vadim,

> >>>It is particularly the removal of the Destination interface that
> >>>breaks the interface. However, removing the setDestDir will
> >>>certainly make a cleaner interface. I'll do that.

> >I didn't remove it in the end, because it would have taken quite a
> >bit of plumbing to work around its non-presence (in order to at least
> >keep the CLI interface the same). 

> Hm... I found only one place to be changed in Main.java. Line 481,
> need to pass destDir. Otherwise, there is no major changes in CLI.

It is only mentioned along with the <dest-dir> node in the cli.xconf. If 
I removed that option in the xconf file, setDestDir would go easily.

However, that would require the user to specify the destination for 
every uri in the xconf file. Is that okay, or should I provide another
way to provide a catch all destination, e.g:

<uris dest="build/dest">
  <uri src="xxxx"/>
</uris>

> And one more thing... CocoonBean.java, line 166: error text does not
> make much sense in Bean context (what is "-d"? :), this text is
> appropriate in Main.java. 

Yup. Should be an exception and Main reports it.

> And System.exit should be replaced with
> exception, I guess you already know this.

Eventually. 

Thanks, Upayavira
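The refactoring agreed on above (the bean throws, and only the CLI front end knows about "-d" and exit codes) might look roughly like this. All class and message names here are hypothetical, not the actual CocoonBean API:

```java
// Sketch only - hypothetical names, not the real CocoonBean code.
// The bean reports failures as exceptions; the CLI layer (Main)
// translates them into usage text and an exit code.
class ProcessingException extends Exception {
    ProcessingException(String msg) { super(msg); }
}

class Bean {
    // Throws instead of printing "-d" usage text and calling System.exit().
    void process(String destDir) throws ProcessingException {
        if (destDir == null) {
            throw new ProcessingException("no destination directory set");
        }
        // ... actual processing would go here ...
    }
}

public class Main {
    public static void main(String[] args) {
        try {
            new Bean().process(args.length > 0 ? args[0] : null);
        } catch (ProcessingException e) {
            // CLI-specific wording lives here, not in the bean.
            System.err.println("Error: " + e.getMessage() + " (use -d <dir>)");
            System.exit(1);
        }
    }
}
```

This keeps the bean embeddable: a programmatic caller catches the exception, while the command line still gets its error message and non-zero exit status.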


Re: [CLI] Breaking CocoonBean Interface

Posted by Vadim Gritsenko <va...@verizon.net>.
Upayavira wrote:

>On 4 Jun 2003 at 23:22, Vadim Gritsenko wrote:
>
>>>It is particularly the removal of the Destination interface that
>>>breaks the interface. However, removing the setDestDir will certainly
>>>make a cleaner interface. I'll do that.
>>
>>Thanks. Sorry for me being slow.
>
>Slow to reply, or slow to understand? ;-)
>
>I didn't remove it in the end, because it would have taken quite a bit of plumbing to 
>work around its non-presence (in order to at least keep the CLI interface the same). 
>

Hm... I found only one place to be changed in Main.java. Line 481, need 
to pass destDir. Otherwise, there is no major changes in CLI.


>Perhaps I should deprecate it and remove it later?
>

It's not the best idea because this class was just recently created. 
It's better either to leave it alone or remove.

And one more thing... CocoonBean.java, line 166: error text does not 
make much sense in Bean context (what is "-d"? :), this text is 
appropriate in Main.java. And System.exit should be replaced with 
exception, I guess you already know this.

Vadim



Re: [CLI] Breaking CocoonBean Interface

Posted by Upayavira <uv...@upaya.co.uk>.
On 4 Jun 2003 at 23:22, Vadim Gritsenko wrote:

> >It is particularly the removal of the Destination interface that
> >breaks the interface. However, removing the setDestDir will certainly
> >make a cleaner interface. I'll do that.
> 
> Thanks. Sorry for me being slow.

Slow to reply, or slow to understand? ;-)

I didn't remove it in the end, because it would have taken quite a bit of plumbing to 
work around its non-presence (in order to at least keep the CLI interface the same). 
Perhaps I should deprecate it and remove it later?

Thanks again.

Upayavira


Re: [CLI] Breaking CocoonBean Interface

Posted by Vadim Gritsenko <va...@verizon.net>.
Upayavira wrote:
...

>It is particularly the removal of the Destination interface that breaks the interface. 
>However, removing the setDestDir will certainly make a cleaner interface. I'll do that.

Thanks. Sorry for me being slow.

Vadim




Re: [proposal] a new kind of 'dist'

Posted by Geoff Howard <co...@leverageweb.com>.
At 09:28 AM 3/24/2003, Stefano wrote

<snip/>

>Am I the only one who heard complaints about cocoon being very cool but too 
>hard to 'tune down' to simpler needs?
>
>I'm asking because I'm starting to wonder if this is the case.
>
><puzzled/>
>
>Stefano.

No, you're not crazy.  This has been a theme for a while.  In fact, it was 
one of the first questions I had when I started.  It's a clear need IMHO.

Geoff 


Re: [proposal] a new kind of 'dist'

Posted by Vadim Gritsenko <va...@verizon.net>.
Stefano Mazzocchi wrote:

> Am I the only one who heard complaints about cocoon being very cool but 
> too hard to 'tune down' to simpler needs?
>
> I'm asking because I'm starting to wonder if this is the case.
>
> <puzzled/>


It's still 116 mails to go... But no, you are not alone.

Vadim



Re: [proposal] a new kind of 'dist'

Posted by Geoff Howard <co...@leverageweb.com>.
At 10:41 AM 3/24/2003, Stefano wrote:
>Steven Noels wrote:
>>On 24/03/2003 15:28 Stefano Mazzocchi wrote:
>>
>>>>I for one think the 'copy cocoon.war into webapps dir and look at 
>>>>http://server:port/cocoon/' paradigm helps a lot of newcoming users up 
>>>>& running very fast.
>>>
>>>
>>>
>>>please, define 'be up and running'.
>>
>>Not knowing about javac, ant, and whatelse, just doing a simple filecopy, 
>>and still be greeted with 'welcome to cocoon'. There's a lot of apps who 
>>get distributed that way.
>
>Granted. Unfortunately, cocoon is not an app.
>
>Therefore, 'up and running' doesn't mean 'being able to run it', but 
>'being able to have my stuff run inside of it'.


Well, there are two important things that need to be easy.  One is taking 
cocoon for a test drive by checking out the samples, and the other is 
'being able to have my stuff run inside of it'.  The first has been easy at 
the expense of the second, at least if you want to get rid of what you 
don't need.  I think everyone agrees that both should be easy, and it 
sounds like that will be accomplished.

Geoff 


Re: [proposal] a new kind of 'dist'

Posted by Stefano Mazzocchi <st...@apache.org>.
Steven Noels wrote:
> On 24/03/2003 15:28 Stefano Mazzocchi wrote:
> 
>>> I for one think the 'copy cocoon.war into webapps dir and look at 
>>> http://server:port/cocoon/' paradigm helps a lot of newcoming users 
>>> up & running very fast. 
>>
>>
>>
>> please, define 'be up and running'.
> 
> 
> Not knowing about javac, ant, and whatelse, just doing a simple 
> filecopy, and still be greeted with 'welcome to cocoon'. There's a lot 
> of apps who get distributed that way.

Granted. Unfortunately, cocoon is not an app.

Therefore, 'up and running' doesn't mean 'being able to run it', but 
'being able to have my stuff run inside of it'.

We are using a distribution paradigm meant for webapps, and I think this 
is harming us more than it's helping us.


>> I'm asking because I'm starting to wonder if this is the case.
> 
> No Stefano, you should not wonder. However, we are all projecting 
> comments we hear in our private hemisphere as the 'overall 
> appreciation', much of it being based on our own assumptions however.

True, this is why I've asked to work incrementally, starting with a 
build-based distribution.

> Then again, responding against user complaints is easier than inventing 
> from scratch a new build/dist system which makes both our and the user's 
> life easier. So let's get over with this, and see what the users think. 
> They've been kept out of the loop for too long.

All right. Deal.

Stefano.


Re: [proposal] a new kind of 'dist'

Posted by "J.D. Daniels" <jd...@datatrio.com>.
IMHO:

Cocoon is best suited to people who develop using other technologies - PHP,
C++, Visual Basic, etc. - who have realized their limitations for end-user
interaction.

Within a half hour, right now the way the system is, you can see what cocoon
does. It isn't an app, and it is not a solution. It is a tool - a framework to
make your solutions available. It doesn't do the work for you. A web
designer can't start pumping out banking solutions. It does not need to be
'dummied up', which is where I think this thread has started to go. Yes,
it makes separation of concerns easy, and people are using it for web site
development to keep content away from design, but the power here is at the
application level.

The only problem is knowing enough about -how- cocoon works to strip it down
to what you need. So you will never make a binary to suit every need. A lot
of the people I have gotten interested in cocoon give up because they simply
don't get how to configure it, and I think this is mostly from the
misconception that it is a webapp. Really, I wouldn't care if there was a
binary or not... but changing the src dist so that it doesn't say
cocoon-***-src.tar would make the difference. Call the binary 'examples' -
that is really what it is. You get the binary, see what it does, peek in
at the files, and after a week hit the point of frustration that sends
you to the source - which at that point doesn't really help unless you are
stubborn or really interested in learning new stuff. I think this whole
thing could be resolved with a better INSTALL.txt and the addition of a
COMPONENTS.txt - something that tells you what the heck your options -are-.

JD


RE: [proposal] a new kind of 'dist'

Posted by Matthew Langham <ml...@s-und-n.de>.
> life easier. So let's get over with this, and see what the users think.
> They've been kept out of the loop for too long.
>

How about _asking_ the users beforehand? A quick poll in the users list
would give some helpful feedback wouldn't it?

Matthew


Re: [proposal] a new kind of 'dist'

Posted by Steven Noels <st...@outerthought.org>.
On 24/03/2003 15:28 Stefano Mazzocchi wrote:

>> I for one think the 'copy cocoon.war into webapps dir and look at 
>> http://server:port/cocoon/' paradigm helps a lot of newcoming users up 
>> & running very fast. 
> 
> 
> please, define 'be up and running'.

Not knowing about javac, ant, and whatelse, just doing a simple 
filecopy, and still be greeted with 'welcome to cocoon'. There's a lot 
of apps who get distributed that way.

>> Even the Jetty path, as nice as it seems for us developers, might 
>> unfortunately not be of much use for them since their IT department 
>> wants them to deploy on XXX appserver anyhow.
> 
> 
> ./build.sh war

OK

> the difference between the above and a prebuild war is that the above 
> can be tuned for my needs with little effort (just modify the 
> blocks.properties) while the prebuild war requires cocoon gurus to 
> remove stuff because of all the inner dependencies and the thousands 
> jars we ship.

OK

>> And although in general, I agree on the principle of Cocoon really is 
>> a _framework_ for developers, reality tells me it is actively used by 
>> people who _don't_ want to program in order to do Java/XML-based 
>> websites.
> 
> 
> Am I the only one who heard complaints about cocoon being very cool but 
> too hard to 'tune down' to simpler needs?

Nope, people have been asking for a blank webapp for over a year IIRC. 
Related to that, some of us have been advocating _against_ adding extra 
weight to Cocoon. How many of you are using these SAP R/3 components, 
now? ;-)

> I'm asking because I'm starting to wonder if this is the case.

No Stefano, you should not wonder. However, we are all projecting 
comments we hear in our private hemisphere as the 'overall 
appreciation', much of it being based on our own assumptions however.

Then again, responding against user complaints is easier than inventing 
from scratch a new build/dist system which makes both our and the user's 
life easier. So let's get over with this, and see what the users think. 
They've been kept out of the loop for too long.

</Steven>
-- 
Steven Noels                            http://outerthought.org/
Outerthought - Open Source, Java & XML Competence Support Center
Read my weblog at            http://blogs.cocoondev.org/stevenn/
stevenn at outerthought.org                stevenn at apache.org


Re: [proposal] a new kind of 'dist'

Posted by Stefano Mazzocchi <st...@apache.org>.
Steven Noels wrote:
> On 24/03/2003 14:01 Upayavira wrote:
> 
>> Great! Do others think this is worth doing?
> 
> 
> Possible, sure. Worth doing, I really don't know.

I won't vote against it but I won't help it making it happen.

> Related to this thread:
> 
> I for one think the 'copy cocoon.war into webapps dir and look at 
> http://server:port/cocoon/' paradigm helps a lot of newcoming users up & 
> running very fast. 

please, define 'be up and running'.

> Even the Jetty path, as nice as it seems for us 
> developers, might unfortunately not be of much use for them since their 
> IT department wants them to deploy on XXX appserver anyhow.

./build.sh war

the difference between the above and a prebuild war is that the above 
can be tuned for my needs with little effort (just modify the 
blocks.properties) while the prebuild war requires cocoon gurus to 
remove stuff because of all the inner dependencies and the thousands 
jars we ship.

> And although in general, I agree on the principle of Cocoon really is a 
> _framework_ for developers, reality tells me it is actively used by 
> people who _don't_ want to program in order to do Java/XML-based websites.

Am I the only one who heard complaints about cocoon being very cool but 
too hard to 'tune down' to simpler needs?

I'm asking because I'm starting to wonder if this is the case.

<puzzled/>

Stefano.


Re: [proposal] a new kind of 'dist'

Posted by Steven Noels <st...@outerthought.org>.
On 24/03/2003 14:01 Upayavira wrote:

> Great! Do others think this is worth doing?

Possible, sure. Worth doing, I really don't know.

Related to this thread:

I for one think the 'copy cocoon.war into webapps dir and look at 
http://server:port/cocoon/' paradigm helps a lot of newcoming users up & 
running very fast. Even the Jetty path, as nice as it seems for us 
developers, might unfortunately not be of much use for them since their 
IT department wants them to deploy on XXX appserver anyhow.

And although in general, I agree on the principle of Cocoon really is a 
_framework_ for developers, reality tells me it is actively used by 
people who _don't_ want to program in order to do Java/XML-based websites.

</Steven>
-- 
Steven Noels                            http://outerthought.org/
Outerthought - Open Source, Java & XML Competence Support Center
Read my weblog at            http://blogs.cocoondev.org/stevenn/
stevenn at outerthought.org                stevenn at apache.org


Re: Re: Cannot load JDBC driver class

Posted by Shyly Amarasinghe <am...@dpw.com>.
Um, is TOMCAT supposed to be an environment variable?  Sorry for the typo - it should have been %CATALINA_HOME%.  fyi, right now my CLASSPATH includes c:\local\java\j2re1.4.1_02\lib\rt.jar;.;c:\local\jre\lib\tools.jar;c:\local\jre\lib\servlet.jar;

Everything else (servlet, taglib) seems to work fine.  fyi, I set up the datasource using the tomcat administrator under Resources -> Datasources.  (I also tried editing server.xml manually to create it, but got an error there as well.)  Also worth noting, when I go through the administrator to Tomcat server -> Server -> Host -> Context (/para) -> Resources -> Datasources, I get an error message "org.apache.jasper.JasperException: Exception retrieving attribute 'driverClassName'" which is confusing since that attribute is defined in server.xml.  This error message isn't in any of the other webapps.  If I take the <resource-ref> code out of web.xml, that error message goes away too.

You're being very patient and helpful - thank you very much!

Here is webapps\para\web-inf\web.xml
<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "http://java.sun.com/dtd/web-app_2_3.dtd">
<web-app>
     <display-name>paralegal</display-name>
     <description>Paralegal status</description>
     <servlet>
          <servlet-name>
               LawSchools
          </servlet-name>
          <servlet-class>
               LawSchools
          </servlet-class>
     </servlet>
    <servlet-mapping>
        <servlet-name>
            LawSchools
        </servlet-name>
        <url-pattern>
            /LawSchools
        </url-pattern>
    </servlet-mapping>
     <taglib>
          <taglib-uri>http://jakarta.apache.org/taglibs/application-1.0</taglib-uri>
          <taglib-location>/WEB-INF/c.tld</taglib-location>
     </taglib>
  <resource-ref>
      <description>My DB Connection</description>
      <res-ref-name>jdbc/mydb</res-ref-name>
      <res-type>javax.sql.DataSource</res-type>
      <res-auth>Container</res-auth>
  </resource-ref>
</web-app>
************************
And this is server.xml

<?xml version='1.0' encoding='utf-8'?>
<Server className="org.apache.catalina.core.StandardServer" debug="0" port="8005" shutdown="SHUTDOWN">
  <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" debug="0" jsr77Names="false"/>
  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" debug="0"/>
  <GlobalNamingResources>
    <Environment name="simpleValue" override="true" type="java.lang.Integer" value="30"/>
    <Resource auth="Container" description="User database that can be updated and saved" name="UserDatabase" scope="Shareable" type="org.apache.catalina.UserDatabase"/>
    <Resource name="jdbc/profsysbackup" scope="Shareable" type="javax.sql.DataSource"/>
    <ResourceParams name="UserDatabase">
      <parameter>
        <name>factory</name>
        <value>org.apache.catalina.users.MemoryUserDatabaseFactory</value>
      </parameter>
      <parameter>
        <name>pathname</name>
        <value>conf/tomcat-users.xml</value>
      </parameter>
    </ResourceParams>
    <ResourceParams name="jdbc/mydb">
      <parameter>
        <name>maxWait</name>
        <value>5000</value>
      </parameter>
      <parameter>
        <name>maxActive</name>
        <value>2</value>
      </parameter>
      <parameter>
        <name>password</name>
        <value>xxx</value>
      </parameter>
      <parameter>
        <name>url</name>
        <value>jdbc:sybase:Tds:xxx:5000</value>
      </parameter>
      <parameter>
        <name>driverClassName</name>
        <value>com.sybase.jdbc2.jdbc.SybDriver</value>
      </parameter>
      <parameter>
        <name>maxIdle</name>
        <value>2</value>
      </parameter>
      <parameter>
        <name>username</name>
        <value>xxx</value>
      </parameter>
    </ResourceParams>
  </GlobalNamingResources>
  <Service className="org.apache.catalina.core.StandardService" debug="0" name="Tomcat-Standalone">
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector" acceptCount="100" bufferSize="2048" compression="off" connectionLinger="-1" connectionTimeout="20000" debug="0" disableUploadTimeout="true" enableLookups="true" maxKeepAliveRequests="100" maxProcessors="75" minProcessors="5" port="8080" protocolHandlerClassName="org.apache.coyote.http11.Http11Protocol" proxyPort="0" redirectPort="8443" scheme="http" secure="false" tcpNoDelay="true" useURIValidationHack="false">
      <Factory className="org.apache.catalina.net.DefaultServerSocketFactory"/>
    </Connector>
    <Connector className="org.apache.coyote.tomcat4.CoyoteConnector" acceptCount="10" bufferSize="2048" compression="off" connectionLinger="-1" connectionTimeout="0" debug="0" disableUploadTimeout="false" enableLookups="true" maxKeepAliveRequests="100" maxProcessors="75" minProcessors="5" port="8009" protocolHandlerClassName="org.apache.jk.server.JkCoyoteHandler" proxyPort="0" redirectPort="8443" scheme="http" secure="false" tcpNoDelay="true" useURIValidationHack="false">
      <Factory className="org.apache.catalina.net.DefaultServerSocketFactory"/>
    </Connector>
    <Engine className="org.apache.catalina.core.StandardEngine" debug="0" defaultHost="localhost" mapperClass="org.apache.catalina.core.StandardEngineMapper" name="Standalone">
      <Host className="org.apache.catalina.core.StandardHost" appBase="webapps" autoDeploy="true" configClass="org.apache.catalina.startup.ContextConfig" contextClass="org.apache.catalina.core.StandardContext" debug="0" deployXML="true" errorReportValveClass="org.apache.catalina.valves.ErrorReportValve" liveDeploy="true" mapperClass="org.apache.catalina.core.StandardHostMapper" name="localhost" unpackWARs="true">
        <Context className="org.apache.catalina.core.StandardContext" cachingAllowed="true" charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true" crossContext="false" debug="0" displayName="Tomcat Administration Application" docBase="../server/webapps/admin" mapperClass="org.apache.catalina.core.StandardContextMapper" path="/admin" privileged="true" reloadable="false" swallowOutput="false" useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
          <Logger className="org.apache.catalina.logger.FileLogger" debug="0" directory="logs" prefix="localhost_admin_log." suffix=".txt" timestamp="true" verbosity="1"/>
        </Context>
        <Context className="org.apache.catalina.core.StandardContext" cachingAllowed="true" charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true" crossContext="false" debug="0" displayName="Webdav Content Management" docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\webdav" mapperClass="org.apache.catalina.core.StandardContextMapper" path="/webdav" privileged="false" reloadable="false" swallowOutput="false" useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
        </Context>
        <Context className="org.apache.catalina.core.StandardContext" cachingAllowed="true" charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true" crossContext="true" debug="0" displayName="paralegal" docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\para" mapperClass="org.apache.catalina.core.StandardContextMapper" path="/para" privileged="false" reloadable="false" swallowOutput="false" useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
          <Resource auth="Container" description="DB Connection" name="jdbc/mydb" scope="Shareable" type="javax.sql.DataSource"/>
        </Context>
        <Context className="org.apache.catalina.core.StandardContext" cachingAllowed="true" charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true" crossContext="true" debug="0" displayName="Tomcat Examples" docBase="examples" mapperClass="org.apache.catalina.core.StandardContextMapper" path="/examples" privileged="false" reloadable="true" swallowOutput="false" useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
          <Logger className="org.apache.catalina.logger.FileLogger" debug="0" directory="logs" prefix="localhost_examples_log." suffix=".txt" timestamp="true" verbosity="1"/>
          <Parameter name="context.param.name" override="false" value="context.param.value"/>
          <Ejb home="com.wombat.empl.EmployeeRecordHome" name="ejb/EmplRecord" remote="com.wombat.empl.EmployeeRecord" type="Entity"/>
          <Ejb description="Example EJB Reference" home="com.mycompany.mypackage.AccountHome" name="ejb/Account" remote="com.mycompany.mypackage.Account" type="Entity"/>
          <Environment name="maxExemptions" override="true" type="java.lang.Integer" value="15"/>
          <Environment name="foo/name4" override="true" type="java.lang.Integer" value="10"/>
          <Environment name="minExemptions" override="true" type="java.lang.Integer" value="1"/>
          <Environment name="foo/bar/name2" override="true" type="java.lang.Boolean" value="true"/>
          <Environment name="name3" override="true" type="java.lang.Integer" value="1"/>
          <Environment name="foo/name1" override="true" type="java.lang.String" value="value1"/>
          <LocalEjb description="Example Local EJB Reference" home="com.mycompany.mypackage.ProcessOrderHome" local="com.mycompany.mypackage.ProcessOrder" name="ejb/ProcessOrder" type="Session"/>
          <Resource auth="SERVLET" name="jdbc/EmployeeAppDb" scope="Shareable" type="javax.sql.DataSource"/>
          <Resource auth="Container" name="mail/Session" scope="Shareable" type="javax.mail.Session"/>
          <ResourceParams name="jdbc/EmployeeAppDb">
            <parameter>
              <name>password</name>
              <value></value>
            </parameter>
            <parameter>
              <name>url</name>
              <value>jdbc:HypersonicSQL:database</value>
            </parameter>
            <parameter>
              <name>driverClassName</name>
              <value>org.hsql.jdbcDriver</value>
            </parameter>
            <parameter>
              <name>username</name>
              <value>sa</value>
            </parameter>
          </ResourceParams>
          <ResourceParams name="mail/Session">
            <parameter>
              <name>mail.smtp.host</name>
              <value>localhost</value>
            </parameter>
          </ResourceParams>
          <ResourceLink global="simpleValue" name="linkToGlobalResource" type="java.lang.Integer"/>
        </Context>
        <Context className="org.apache.catalina.core.StandardContext" cachingAllowed="true" charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true" crossContext="false" debug="0" docBase="c:/local/tomcat/jakarta-tomcat-4.1.24/webapps/application-examples" mapperClass="org.apache.catalina.core.StandardContextMapper" path="/application-examples" privileged="false" reloadable="false" swallowOutput="false" useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
        </Context>
        <Context className="org.apache.catalina.core.StandardContext" cachingAllowed="true" charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true" crossContext="false" debug="0" displayName="Tomcat Documentation" docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\tomcat-docs" mapperClass="org.apache.catalina.core.StandardContextMapper" path="/tomcat-docs" privileged="false" reloadable="false" swallowOutput="false" useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
        </Context>
        <Context className="org.apache.catalina.core.StandardContext" cachingAllowed="true" charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true" crossContext="true" debug="0" displayName="Shyly" docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\shyly" mapperClass="org.apache.catalina.core.StandardContextMapper" path="/shyly" privileged="false" reloadable="true" swallowOutput="false" useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
        </Context>
        <Context className="org.apache.catalina.core.StandardContext" cachingAllowed="true" charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true" crossContext="false" debug="0" displayName="Tomcat Manager Application" docBase="../server/webapps/manager" mapperClass="org.apache.catalina.core.StandardContextMapper" path="/manager" privileged="true" reloadable="false" swallowOutput="false" useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
          <ResourceLink global="UserDatabase" name="users" type="org.apache.catalina.UserDatabase"/>
        </Context>
        <Context className="org.apache.catalina.core.StandardContext" cachingAllowed="true" charsetMapperClass="org.apache.catalina.util.CharsetMapper" cookies="true" crossContext="false" debug="0" displayName="Welcome to Tomcat" docBase="C:\local\tomcat\jakarta-tomcat-4.1.24\webapps\ROOT" mapperClass="org.apache.catalina.core.StandardContextMapper" path="" privileged="false" reloadable="false" swallowOutput="false" useNaming="true" wrapperClass="org.apache.catalina.core.StandardWrapper">
        </Context>
        <Logger className="org.apache.catalina.logger.FileLogger" debug="9" directory="logs" prefix="localhost_log." suffix=".txt" timestamp="true" verbosity="4"/>
      </Host>
      <Logger className="org.apache.catalina.logger.FileLogger" debug="0" directory="logs" prefix="catalina_log." suffix=".txt" timestamp="true" verbosity="1"/>
      <Realm className="org.apache.catalina.realm.UserDatabaseRealm" debug="0" resourceName="UserDatabase" validate="true"/>
    </Engine>
  </Service>
</Server>

At 02:12 PM 4/17/2003 -0400, you wrote:

>Hi Shyly,
>
>It looks like you have everything right.
>
>You are not missing an environment variable, assuming you meant
>%CATALINA_HOME% and not %TOMCAT% or %CATALINA_HOM% below.
>
>Do you have the context entry in server.xml inside <host>?
>
>Also do you have the <resource-ref> in the right place in the web.xml file?
>Those entries have to be in the right order.
>
>It has to be after </error-page> and before <security-constraint>.
>
>Can you post (or send directly) you entire server.xml and web.xml files
>after sanitizing them?
>
>Rick


---------------------------------------------------------------------
To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: tomcat-user-help@jakarta.apache.org


Re: task : from which directory is the java command started

Posted by Rudolf Nottrott <rn...@alexandria.UCSB.edu>.
Thanks Antoine for the suggestion, but it doesn't seem to make a difference.

Basically, I need to make sure that the command

'java org.hsqldb.Server -database testxyz'

which is run by the ant target below, is started from the current 
directory, the directory in which the database files are.

How can I control the directory in which a <java> task gets started?  Is 
there a parameter for that?
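For what it's worth, the forked <java> task does take a dir attribute that sets the working directory of the spawned JVM, and, per Antoine's note, the option and its value need to be separate <arg> elements. A sketch reusing the target quoted below; the db/ subdirectory is a hypothetical location for the database files:

```xml
<target name="hsqldb" description="Start the HSQLDB sample database">
  <!-- dir is only honoured when fork="true"; ${basedir}/db is an assumed location -->
  <java classname="org.hsqldb.Server" fork="true" failonerror="true"
        maxmemory="128m" dir="${basedir}/db">
    <arg value="-database"/>
    <arg value="testxyz"/>
    <classpath><pathelement location="${lib.home}/hsqldb.jar"/></classpath>
  </java>
</target>
```

As for tracing: an <echo message="basedir is ${basedir}"/> just before the <java> element shows the directory Ant resolves relative paths against, which serves as the "print working directory" step.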

Thanks,
Rudolf


At 08:53 AM 1/20/2004 +0100, you wrote:
>You probably want 2 arguments :
><arg value="-database"/>
><arg value="testxyz"/>
>
>Antoine

>Rudolf Nottrott wrote:
>
>>Hi,
>>
>>I have an Ant <java...> task that runs a database program named 
>>org.hsqldb.Server, see below.  The argument  to the Server is a database 
>>name, "-database testxyz".  The database testxyz.* is supposed to be 
>>taken from (or created in) the directory you were in when you issued the 
>>Java command that started the database Server.
>>Now, I didn't start Java -- I started Ant which started Java.
>>
>>Here is the task:
>><target name="hsqldb" description="Start the HSQLB sample database">
>>    <java classname="org.hsqldb.Server" fork="true" failonerror="true"
>>maxmemory="128m" >
>>       <arg value="-database testxyz"/>
>>       <classpath><pathelement 
>> location="${lib.home}/hsqldb.jar"/></classpath>
>>     </java>
>></target>
>>
>>The server starts up ok, but I'm not getting the database I want, 
>>testxyz, and so I'm trying to verify the directory from which the Java 
>>command of the <java ...> task was issued.
>>Any ideas on how to trace this?  Is there perhaps some task like "print 
>>working directory" that I could run in conjunction with the <java ...> task?
>>


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@ant.apache.org
For additional commands, e-mail: user-help@ant.apache.org


off Topic: XSLT debugging with Xerces

Posted by Marc Mueller <ma...@danet.de>.

Sorry for this question in this context,
but does anyone here know how to get the Xerces parser to report more 
debug info during XSLT processing? E.g. the line of the error, and in which 
input document it occurred?

Kind regards,
Marc Müller
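A note on this: Xerces itself only parses; the XSLT work is done by the XSLT processor (typically Xalan), and the JAXP ErrorListener is the standard hook for getting file and line information out of it via TransformerException.getMessageAndLocation(). A hedged sketch, assuming a JAXP-compliant processor on the classpath (the class name XsltErrors is mine, not from any library):

```java
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;
import javax.xml.transform.ErrorListener;
import javax.xml.transform.TransformerConfigurationException;
import javax.xml.transform.TransformerException;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamSource;

public class XsltErrors {

    /** Compile a stylesheet, collecting message-plus-location strings
     *  (system id and line number) instead of a bare failure. */
    static List<String> compileErrors(String xslt) {
        final List<String> errors = new ArrayList<String>();
        TransformerFactory factory = TransformerFactory.newInstance();
        factory.setErrorListener(new ErrorListener() {
            public void warning(TransformerException e)    { errors.add(e.getMessageAndLocation()); }
            public void error(TransformerException e)      { errors.add(e.getMessageAndLocation()); }
            public void fatalError(TransformerException e) { errors.add(e.getMessageAndLocation()); }
        });
        try {
            factory.newTransformer(new StreamSource(new StringReader(xslt)));
        } catch (TransformerConfigurationException e) {
            // Some parse-level failures surface only here, not via the listener.
            if (errors.isEmpty()) {
                errors.add(e.getMessageAndLocation());
            }
        }
        return errors;
    }

    public static void main(String[] args) {
        // A deliberately malformed stylesheet: the root element is never closed.
        for (String err : compileErrors("<xsl:stylesheet version=\"1.0\"")) {
            System.out.println(err);
        }
    }
}
```

The same listener can be set on the Transformer itself to catch run-time errors during the actual transformation, not just compilation.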


---------------------------------------------------------------------
To unsubscribe, e-mail: fop-dev-unsubscribe@xml.apache.org
For additional commands, email: fop-dev-help@xml.apache.org


Re: 0.18 release

Posted by Kelly Campbell <ca...@merlot.channelpoint.com>.
A couple more items to make sure to do before building the final dist:

1) Update version property in build.xml (remove "-DEV")
2) Update version in conf/config.xml (actually, I should have ant filter
this file during the build so we only have one place where the version is
kept... so I'll do this tonight)

-Kelly

On Tue, Mar 27, 2001 at 11:29:43PM -0400, Arved Sandstrom wrote:
> At 08:20 PM 3/27/01 -0700, Kelly Campbell wrote:
> >On Tue, Mar 27, 2001 at 01:31:12PM +0200, Fotis Jannidis wrote:
> >> runtests.bat in examples doesn't work. 2 problems: 
> >> 1) ant.jar is missing in the lib directory
> >> 2) the anttask Fop is also missing - should be in buildtools.jar 
> >> which is also missing
> >
> >I guess I didn't think of the runtests scripts needing ant. The Fop
> >anttask has never been in buildtools.jar, it is in the fop.jar which
> >should be in the classpath to run Fop anyway. 
> >
> >So do we want to go ahead and include ant.jar and make the runtests.bat
> >work in the binary distribution? Or should we just make the src
> >distribution the main one that people should download and forego the
> >binary only dist? This would make it more like the previous distributions.
> 
> For simplicity I vote for sticking to the source distro that has everything. 
> At least for a quickie fix to this current situation, and then we can 
> discuss it further. So we would be looking at a FOP 0.18.1, source only, as 
> of Friday or Saturday? Sound reasonable?
> 
> Regards,
> Arved
> 
> Fairly Senior Software Type
> e-plicity (http://www.e-plicity.com)
> Wireless * B2B * J2EE * XML --- Halifax, Nova Scotia
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: fop-dev-unsubscribe@xml.apache.org
> For additional commands, email: fop-dev-help@xml.apache.org

-- 
Kelly A. Campbell              Software Engineer
<ca...@merlotxml.org>           ChannelPoint, Inc.
<ca...@channelpoint.com>        Colorado Springs, Co.



Re: 0.18 release

Posted by Fotis Jannidis <fo...@lrz.uni-muenchen.de>.
Arved: 
> For simplicity I vote for sticking to the source distro that has everything. 
> At least for a quickie fix to this current situation, and then we can 
> discuss it further. So we would be looking at a FOP 0.18.1, source only, as 
> of Friday or Saturday? Sound reasonable?

+1

Kelly, thanks for fixing the makedoc problem; I also tried it with a 
Xalan 1 version, but it didn't work, probably the wrong one.

Fotis




Re: 0.18 release

Posted by Arved Sandstrom <Ar...@chebucto.ns.ca>.
At 08:20 PM 3/27/01 -0700, Kelly Campbell wrote:
>On Tue, Mar 27, 2001 at 01:31:12PM +0200, Fotis Jannidis wrote:
>> runtests.bat in examples doesn't work. 2 problems: 
>> 1) ant.jar is missing in the lib directory
>> 2) the anttask Fop is also missing - should be in buildtools.jar 
>> which is also missing
>
>I guess I didn't think of the runtests scripts needing ant. The Fop
>anttask has never been in buildtools.jar, it is in the fop.jar which
>should be in the classpath to run Fop anyway. 
>
>So do we want to go ahead and include ant.jar and make the runtests.bat
>work in the binary distribution? Or should we just make the src
>distribution the main one that people should download and forego the
>binary only dist? This would make it more like the previous distributions.

For simplicity I vote for sticking to the source distro that has everything. 
At least for a quickie fix to this current situation, and then we can 
discuss it further. So we would be looking at a FOP 0.18.1, source only, as 
of Friday or Saturday? Sound reasonable?

Regards,
Arved

Fairly Senior Software Type
e-plicity (http://www.e-plicity.com)
Wireless * B2B * J2EE * XML --- Halifax, Nova Scotia




RE: Sa -- lint : HOWTO know which cf file gives the problem ?

Posted by Florent Gilain <fl...@direct-energie.com>.
Hmmm, thanks a lot, it was finally easier than I thought  ;-))

Florent

-----Original Message-----
From: Matthias Fuhrmann [mailto:Matthias.Fuhrmann@stud.uni-hannover.de] 
Sent: Thursday, 25 January 2007 21:27
To: users@spamassassin.apache.org
Subject: Re: Sa -- lint : HOWTO know which cf file gives the problem ?

On Thu, 25 Jan 2007, Florent Gilain wrote:

Hi,

> Hello all,
>
> When i run this :
>
> [root@mx1 spamassassin]# spamassassin --lint [21570] warn: config: 
> warning: description exists for non-existent rule MIME_BOUND_NEXTPART 
> [21570] warn: config: warning: description exists for non-existent 
> rule BIZ_TLD [21570] warn: lint: 2 issues detected, please rerun with 
> debug enabled for more information
>
> I am asking myself how to know which *.cf file is the problem... is 
> there an easy way to find it?

either in /etc/mail/spamassassin or in $PREFIX/share/spamassassin, do for
example: 'grep RULENAME *.cf'
If you are using sa-update, you can find the updated main rules in
$PREFIX/var/spamassassin/3.001007/updates_spamassassin_org
(this is for 3.1.7; your path might be
$PREFIX/var/spamassassin/3.001001/updates_spamassassin_org).

result is something like:

grep ZMIde_SUBBIG *.cf
70_zmi_german.cf:header   ZMIde_SUBBIG Subject =~ /(?:Eilig
70_zmi_german.cf:describe ZMIde_SUBBIG subject suggesting business
70_zmi_german.cf:score    ZMIde_SUBBIG 1.8

so the file containing the rule is 70_zmi_german.cf in the current
directory.

regards,
Matthias


Re: Apache::Session::MySQL, light/heavy proxy, wedging

Posted by Todd Finney <tf...@boygenius.com>.
At 08:57 PM 1/30/2007 -0500, Perrin Harkins wrote:
>Before I spend too much time analyzing your symptoms, are you sure
>that your application requires excusive locks on sessions?  If not,
>you can use Apache::Session::Lock::Null for your locking class.

Eminently reasonable.

>The difference is that without exclusive locks
>you can get lost updates if a user tries to modify a session from two
>separate requests simultaneously.  (Not usually an issue, but it can
>be for certain kinds of applications.)

The sessions are modified on every request, to set a last_access time, and 
they're modified on login to set an authentication token.  I can't think of 
circumstances under which two different requests would attempt to modify a 
given session at the same time.

As much as I'd really like to understand what's actually happening here, 
I'll switch to A::S::Lock::Null if you think that's the best bet.  I don't 
see an example in the Apache::Session docs for switching the locking class, 
though - may I have a pointer?

thanks!



Re: Storing images in SQL BLOBs

Posted by am...@amos.mailshell.com.
On Wed, 2005-09-07 at 14:16 +1000, Murray Collingwood wrote:
> Thanks Jason
> 
> I'm having a strange issue with serving up these images.  I'm getting a "socket write 
> error" from the following code.  There are 3 images, the details follow the code.  The 
> first two images appear, the third fails to appear.

From the error it looks like the browser has decided to close
the connection in the middle for some reason. It could be for
various reasons - like timeout or a bug or it decides that it
can't handle the file type.
You should handle this error gracefully anyway, because even when
everything is dandy the network or the client may go down at
any stage.

What do you see on the browser's side?
Can you try using wget/curl instead of a web browser?

Cheers,

--Amos


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@struts.apache.org
For additional commands, e-mail: user-help@struts.apache.org


Re: Storing images in SQL BLOBs

Posted by Murray Collingwood <mu...@focus-computing.com.au>.
Thanks Jason

I'm having a strange issue with serving up these images.  I'm getting a "socket write 
error" from the following code.  There are 3 images, the details follow the code.  The 
first two images appear, the third fails to appear.

        try {
                response.setContentLength((int) f.length());
                response.setContentType("application/x-file-download");
                response.setHeader("Content-disposition", "attachment; filename=" + name );
                System.err.println(">> " + response.toString());
                FileInputStream fis = new FileInputStream(f);
                ServletOutputStream sos = response.getOutputStream();
                byte[] buffer = new byte[32768];
                int n = 0;
                int x = 0;
                while ((n = fis.read(buffer)) != -1) {
                    System.err.println(">> x = " + x++ + " n = " + n);
                    sos.write(buffer, 0, n);
                }
                fis.close();
                sos.flush();
        } catch (Exception e) {
            System.err.println(">> Error serving image: " + request.getParameter("local"));
            e.printStackTrace();
        }

Image 1 bytes: 7734
Image 2 bytes: 79279
Image 3 bytes: 2871052 (image called "2_another quite night on tour.tif")


The generated log file:
----------------------------
>> org.netbeans.modules.web.monitor.server.MonitorResponseWrapper@8c3eb8
>> x = 0 n = 7734
>> org.netbeans.modules.web.monitor.server.MonitorResponseWrapper@4f2189
>> x = 0 n = 32768
>> x = 1 n = 32768
>> org.netbeans.modules.web.monitor.server.MonitorResponseWrapper@e7cb66
>> x = 0 n = 32768
>> x = 1 n = 32768
>> x = 2 n = 32768
>> x = 3 n = 32768
>> x = 4 n = 32768
>> x = 5 n = 32768
>> x = 6 n = 32768
>> x = 7 n = 32768
>> x = 8 n = 32768
>> x = 9 n = 32768
>> x = 2 n = 13743
>> x = 10 n = 32768
>> Error serving image: 2_another quite night on tour.tif
ClientAbortException: java.net.SocketException: Connection reset by peer: socket write error
        at org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:366)
        at org.apache.tomcat.util.buf.ByteChunk.flushBuffer(ByteChunk.java:403)
        at org.apache.tomcat.util.buf.ByteChunk.append(ByteChunk.java:323)
        at org.apache.catalina.connector.OutputBuffer.writeBytes(OutputBuffer.java:392)
        at org.apache.catalina.connector.OutputBuffer.write(OutputBuffer.java:381)
        at org.apache.catalina.connector.CoyoteOutputStream.write(CoyoteOutputStream.java:76)
        at com.bpx.website.controller.action.Image.execute(Image.java:54)
        at org.apache.struts.action.RequestProcessor.processActionPerform(RequestProcessor.java:419)


Appreciate any help.
Kind regards
mc
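Two details in the trace are worth noting: ClientAbortException only records that the client dropped the connection mid-download, and "application/x-file-download" for a 2.8 MB .tif may itself provoke the browser to give up; an image-specific Content-Type (normally obtained via getServletContext().getMimeType(name)) is friendlier. A hedged sketch of the copy loop as a standalone helper, so the chunking logic can be exercised outside a container (the class and method names are mine, not from the original code):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BlobStreamer {

    /** Copy a stream in 32 KB chunks; returns the number of bytes written. */
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buffer = new byte[32768];
        long total = 0;
        int n;
        while ((n = in.read(buffer)) != -1) {
            out.write(buffer, 0, n);
            total += n;
        }
        out.flush();
        return total;
    }

    /**
     * Tiny extension-to-MIME map; in a servlet you would normally ask
     * getServletContext().getMimeType(name) instead of hard-coding this.
     */
    static String guessType(String name) {
        String lower = name.toLowerCase();
        if (lower.endsWith(".gif")) return "image/gif";
        if (lower.endsWith(".jpg") || lower.endsWith(".jpeg")) return "image/jpeg";
        if (lower.endsWith(".tif") || lower.endsWith(".tiff")) return "image/tiff";
        return "application/octet-stream";
    }

    public static void main(String[] args) throws IOException {
        // Simulate streaming a 2.8 MB blob to a client buffer.
        ByteArrayOutputStream client = new ByteArrayOutputStream();
        long sent = copy(new ByteArrayInputStream(new byte[2871052]), client);
        System.out.println(sent + " bytes as " + guessType("tour.tif"));
    }
}
```

In the servlet itself, a ClientAbortException thrown from the write can safely be caught and logged quietly rather than as a full stack trace, since it only marks the peer's disconnect.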



On 7 Sep 2005 at 15:33, Jason Lea wrote:

> Oh right, you need to discover the location automatically.
> 
> Something like this might work:
> 
> request.getSession().getServletContext().getRealPath("/images");
> 
> That should give you the full path to /images.
> 
> 
> 
> Murray Collingwood wrote:
> 
> >How do I find the actual path to "/images", assuming of course that "/images" is a 
> >url reference?
> >
> >Is there something like $STRUTS_ROOT?
> >
> >Kind regards
> >mc
> >
> >
> >On 7 Sep 2005 at 14:50, Jason Lea wrote:
> >
> >  
> >
> >>I guess it depends if you are trying to restrict access to these images.
> >>
> >>To make all images available for anyone, put them into /images, for example.
> >>
> >>If you want to not allow any direct access to them, you could create a 
> >>directory under /WEB-INF and put them there.  You would then have to 
> >>create something to serve the images up to the browser.
> >>
> >>Or set up container managed security, put them into /auth/images, then 
> >>put a security constraint on /auth/images so that only users with the 
> >>required role are allowed to view the images.
> >>
> >>
> >>Murray Collingwood wrote:
> >>
> >>    
> >>
> >>>Hi all (Gosh, I'm starting to feel like a regular on this list...)
> >>>
> >>>After my experiences below I have rewritten my application to store the images in a 
> >>>local sub-directory, however when I ran this new version the sub-directory was created 
> >>>under the Tomcat/bin directory - not really appropriate.
> >>>
> >>>Should I be trying to reference my application directory and store the images under my 
> >>>"/WEB-INF" directory?
> >>>
> >>>Do I have to setup a special directory on the server that can be referenced by Tomcat 
> >>>to serve the images directly?
> >>>
> >>>What are others doing?
> >>>
> >>>Also, if you are referencing this directory in your application, is it hard-coded or do you 
> >>>have an entry in your 'context.xml'?  What sort of entry do you use?
> >>>
> >>>Kind regards
> >>>mc
> >>>
> >>-- 
> >>Jason Lea
> >>
> >>
> >>
> >>---------------------------------------------------------------------
> >>To unsubscribe, e-mail: user-unsubscribe@struts.apache.org
> >>For additional commands, e-mail: user-help@struts.apache.org
> >>
> >>
> >>
> >>-- 
> >>No virus found in this incoming message.
> >>Checked by AVG Anti-Virus.
> >>Version: 7.0.344 / Virus Database: 267.10.18/91 - Release Date: 6/09/2005
> >>
> >>    
> >>
> >
> >
> >
> >FOCUS Computing
> >Mob: 0415 24 26 24
> >murray@focus-computing.com.au
> >http://www.focus-computing.com.au
> >
> >
> >
> >  
> >
> 
> -- 
> Jason Lea
> 
> 
> 



FOCUS Computing
Mob: 0415 24 26 24
murray@focus-computing.com.au
http://www.focus-computing.com.au



-- 
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.344 / Virus Database: 267.10.18/91 - Release Date: 6/09/2005




Re: Storing images in SQL BLOBs

Posted by Jason Lea <ja...@kumachan.net.nz>.
Oh right, you need to discover the location automatically.

Something like this might work:

request.getSession().getServletContext().getRealPath("/images");

That should give you the full path to /images.



Murray Collingwood wrote:

>How do I find the actual path to "/images", assuming of course that "/images" is a 
>url reference?
>
>Is there something like $STRUTS_ROOT?
>
>Kind regards
>mc
>
>
>On 7 Sep 2005 at 14:50, Jason Lea wrote:
>
>  
>
>>I guess it depends if you are trying to restrict access to these images.
>>
>>To make all images available for anyone, put them into /images, for example.
>>
>>If you want to not allow any direct access to them, you could create a 
>>directory under /WEB-INF and put them there.  You would then have to 
>>create something to serve the images up to the browser.
>>
>>Or set up container managed security, put them into /auth/images, then 
>>put a security constraint on /auth/images so that only users with the 
>>required role are allowed to view the images.
>>
>>
>>Murray Collingwood wrote:
>>
>>    
>>
>>>Hi all (Gosh, I'm starting to feel like a regular on this list...)
>>>
>>>After my experiences below I have rewritten my application to store the images in a 
>>>local sub-directory, however when I ran this new version the sub-directory was created 
>>>under the Tomcat/bin directory - not really appropriate.
>>>
>>>Should I be trying to reference my application directory and store the images under my 
>>>"/WEB-INF" directory?
>>>
>>>Do I have to setup a special directory on the server that can be referenced by Tomcat 
>>>to serve the images directly?
>>>
>>>What are others doing?
>>>
>>>Also, if you are referencing this directory in your application, is it hard-coded or do you 
>>>have an entry in your 'context.xml'?  What sort of entry do you use?
>>>
>>>Kind regards
>>>mc
>>> 
>>>
>>>      
>>>
>>-- 
>>Jason Lea
>>
>>
>>
>>---------------------------------------------------------------------
>>To unsubscribe, e-mail: user-unsubscribe@struts.apache.org
>>For additional commands, e-mail: user-help@struts.apache.org
>>
>>
>>
>>-- 
>>No virus found in this incoming message.
>>Checked by AVG Anti-Virus.
>>Version: 7.0.344 / Virus Database: 267.10.18/91 - Release Date: 6/09/2005
>>
>>    
>>
>
>
>
>FOCUS Computing
>Mob: 0415 24 26 24
>murray@focus-computing.com.au
>http://www.focus-computing.com.au
>
>
>
>  
>

-- 
Jason Lea



Re: Using custom tags and struts together

Posted by Dave Newton <ne...@pingsite.com>.
Murray Collingwood wrote:

>I could use a <logic:greaterThan...> tag to test the value of the security number and 
>then display the link if appropriate.  This gets more complicated when the security 
>number 3 is used, as then I have to test values on the record as well, suddenly I have 
>nested <logic:..> tags and the page is starting to look a lot more complicated than it 
>should.  The other problem with this approach is: "whether a user can update a record 
>or not" is a business logic decision and shouldn't be in the interface.
>
>Now, to complicate it even further, security number 2 is allowed to update records but 
>uses a different form, in other words, they are only allowed to update certain values on 
>the record.  Hence I have to vary the link generated in some instances.
>  
>
If the pages are radically different then perhaps you could use 
completely different presentations based on a user's access 
level--rather than passing a different result set to the page, 
generating different links, etc. you could just send them to an entirely 
different JSP.

I've found that once my page logic goes beyond a simple "show this chunk 
or don't" it's been easier (not always cleaner, I suppose) to have 
completely separate pages altogether. That said, I've only rarely had to 
go that far: usually passing a different collection in to the JSP, 
setting a true/false to choose between two links, etc. has been enough.

I'm interested in how other folks have dealt with similar issues.

Dave





Re: mason book

Posted by Thomas Eibner <th...@stderr.net>.
On Thu, Oct 24, 2002 at 10:13:58PM +0200, Per Einar Ellefsen wrote:
> At 22:03 24.10.2002, allan wrote:
> >why not the eagle book [ it's the last ] ?
> 
> Yeah, but also the most important one. But it's probably more fair to have 
> the last one removed.

I think the Eagle is the most important one too, but how about some
kind of cycling mechanism so all books get exposure?

-- 
  Thomas Eibner <http://thomas.eibner.dk/> DnsZone <http://dnszone.org/>
  mod_pointer <http://stderr.net/mod_pointer> <http://photos.eibner.dk/>
  !(C)<http://copywrong.dk/>                  <http://apachegallery.dk/>
          Putting the HEST in .COM <http://www.hestdesign.com/>

---------------------------------------------------------------------
To unsubscribe, e-mail: docs-dev-unsubscribe@perl.apache.org
For additional commands, e-mail: docs-dev-help@perl.apache.org


Aggregate and Result Table mismatch

Posted by he...@cenix-bioscience.com.
Hello,

this might be more of a bug report than a question, but nonetheless I will go on:
For my test plan I get a very strong peak for one 'HTTP Request' in the 'Aggregate
Report' (max load time). But in 'View Results in Table' I see this one peak with
the same load time but related to another(!) 'HTTP Request'???!!!
(better: I have named all requests and the names do not match)

Do I understand anything wrong?
Thanks,
Michael

--
To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
For additional commands, e-mail: <ma...@jakarta.apache.org>


Re: [PATCH][svnmerge] Fix reporting message for 'avail --blocked'

Posted by David James <dj...@collab.net>.
On 4/18/06, Madan U Sreenivasan <ma...@collab.net> wrote:
> [[[
> Fix verbose message for `avail --blocked'.
>
> * contrib/client-side/svnmerge.py
>     (action_avail): Modified to use a different report() message, when
>     `svnmerge avail' is invoked with the `--blocked' option.
> ]]]

Madan, your patch looks good. +1 to commit.

Cheers,

David

-- 
David James -- http://www.cs.toronto.edu/~james

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [PATCH] Factorise 'svn lock' validation code

Posted by Michael W Thelen <mi...@pietdepsi.com>.
Michael W Thelen wrote:
> Madan U Sreenivasan wrote:
>> Aye! Aye! captain!, pl. find the patch attached. :)
>>
>> [[[
>> Factorize out code for locking a path and validating
>> the contents of the lock.
>>
>> Suggested by: djames
>> Review by: lundblad
> 
> Thanks for the patch... if no one comments on it within a few days, I'll
> file an issue for it in the tracker.

I've filed it as issue #2569:
http://subversion.tigris.org/issues/show_bug.cgi?id=2569

-- 
Michael W Thelen
It is a mistake to think you can solve any major problems just with
potatoes.       -- Douglas Adams


Re: [PATCH] Factorise 'svn lock' validation code

Posted by Michael W Thelen <mi...@pietdepsi.com>.
Madan U Sreenivasan wrote:
> Aye! Aye! captain!, pl. find the patch attached. :)
> 
> [[[
> Factorize out code for locking a path and validating
> the contents of the lock.
> 
> Suggested by: djames
> Review by: lundblad

Thanks for the patch... if no one comments on it within a few days, I'll
file an issue for it in the tracker.

-- 
Michael W Thelen
It is a mistake to think you can solve any major problems just with
potatoes.       -- Douglas Adams


Re: [PATCH] Factorise 'svn lock' validation code

Posted by Garrett Rooney <ro...@electricjellyfish.net>.
On 6/14/06, Lieven Govaerts <lg...@mobsol.be> wrote:

> this commit seems to make the test fail on the Win32/XP/VS2005 buildslave. On
> Mac & Linux buildslaves it looks ok.
>
> More info (tests.log) here:
> http://www.mobsol.be/buildbot/win32-xp%20VS2005/builds/81/step-Test%20fsfs%2Bra_local/0
>
>
> I don't have the time to look into it right now, if needed I can look into the
> problem this evening.

I'd appreciate it if you could take a look (perhaps by trying the
patch Madan just posted), while I do have the ability to build on
windows now I have not yet managed to get the tests up and running...

Thanks,

-garrett


RE: [PATCH] Factorise 'svn lock' validation code

Posted by Lieven Govaerts <lg...@mobsol.be>.
Committed in r20100.

Lieven. 

> -----Original Message-----
> From: rooneg@gmail.com [mailto:rooneg@gmail.com] On Behalf Of 
> Garrett Rooney
> Sent: woensdag 14 juni 2006 20:21
> To: Lieven Govaerts
> Cc: Madan U Sreenivasan; Peter N. Lundblad; dev@subversion.tigris.org
> Subject: Re: [PATCH] Factorise 'svn lock' validation code
> 
> On 6/14/06, Lieven Govaerts <lg...@mobsol.be> wrote:
> > Madan,
> >
> > I tried your patch but it doesn't work.
> >
> > The actual problem is caused by these two lines of code:
> >
> >   comment = "Locking path:%s." % path
> > [..]
> >   comment_re = re.compile (".*?%s\n.*?" % re.escape(comment), 
> > re.DOTALL)
> >
> > The code does a regex match on the text (expanded): '.*Locking Path:
> > /path/to/file.*'. Problem is that on Windows, the path separator is a
> > backslash, which should be escaped in regular expressions.
> >
> > Attached patch will do just that, so unless you see any new problems
> > I'll commit it.
> 
> Looks fine to me, if it works on windows commit it!
> 
> -garrett
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
> For additional commands, e-mail: dev-help@subversion.tigris.org
> 


Re: [PATCH] Factorise 'svn lock' validation code

Posted by Garrett Rooney <ro...@electricjellyfish.net>.
On 6/14/06, Lieven Govaerts <lg...@mobsol.be> wrote:
> Madan,
>
> I tried your patch but it doesn't work.
>
> The actual problem is caused by these two lines of code:
>
>   comment = "Locking path:%s." % path
> [..]
>   comment_re = re.compile (".*?%s\n.*?" % re.escape(comment), re.DOTALL)
>
> The code does a regex match on the text(expanded): '.*Locking Path:
> /path/to/file.*'. Problem is that on Windows, the path separator is a
> backslash, which should be escaped in regular expressions.
>
> Attached patch will do just that, so unless you see any new problems I'll
> commit it.

Looks fine to me, if it works on windows commit it!

-garrett


RE: [PATCH] Factorise 'svn lock' validation code

Posted by Lieven Govaerts <lg...@mobsol.be>.
Madan,

I tried your patch but it doesn't work. 

The actual problem is caused by these two lines of code:

  comment = "Locking path:%s." % path
[..]
  comment_re = re.compile (".*?%s\n.*?" % re.escape(comment), re.DOTALL)

The code does a regex match on the text (expanded): '.*Locking Path:
/path/to/file.*'. The problem is that on Windows, the path separator is a
backslash, which should be escaped in regular expressions.

Attached patch will do just that, so unless you see any new problems I'll
commit it.

Lieven.
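The effect is easy to reproduce at a Python prompt; a minimal sketch (the path below is hypothetical, chosen so every backslash sequence in it is a legal but wrong regex escape):

```python
import re

path = r"C:\temp\data\file"          # hypothetical Windows-style path
comment = "Locking path:%s." % path
text = "comment:\n%s\nsome more output\n" % comment

# Unescaped, the backslashes are read as regex escapes (\t = tab,
# \d = digit class, \f = form feed), so the literal path never matches.
unescaped = re.compile(".*?%s\n.*?" % comment, re.DOTALL)
assert unescaped.match(text) is None

# re.escape() makes the path match literally, backslashes and all.
escaped = re.compile(".*?%s\n.*?" % re.escape(comment), re.DOTALL)
assert escaped.match(text) is not None
```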


> -----Original Message-----
> From: Madan U Sreenivasan [mailto:madan@collab.net] 
> Sent: woensdag 14 juni 2006 17:58
> To: Lieven Govaerts; Garrett Rooney
> Cc: Peter N. Lundblad; dev@subversion.tigris.org
> Subject: Re: [PATCH] Factorise 'svn lock' validation code
> 
> On Wed, 14 Jun 2006 14:53:39 +0530, Lieven Govaerts 
> <lg...@mobsol.be> wrote:
> 
> > Quoting Garrett Rooney <ro...@electricjellyfish.net>:
> [snip]
> >> Committed in r20086.  Thanks,
> >>
> >> -garrett
> >
> > Garrett,
> >
> >
> > this commit seems to make the test fail on the Win32/XP/VS2005 
> > buildslave. On Mac & Linux buildslaves it looks ok.
> 
> Could this be because the regex for comparing output includes 
> \n and in windows a newline is effectively \r\n?
> 
> I reviewed the code, and this is the only thing I could 
> suspect. Sorry, I dont have a windows box to test it right now...
> 
> Could somebody with a windows box, apply the attached patch, 
> test and see?
> 
> Regards,
> Madan.

Re: [PATCH] Factorise 'svn lock' validation code

Posted by Madan U Sreenivasan <ma...@collab.net>.
On Wed, 14 Jun 2006 14:53:39 +0530, Lieven Govaerts <lg...@mobsol.be> wrote:

> Quoting Garrett Rooney <ro...@electricjellyfish.net>:
[snip]
>> Committed in r20086.  Thanks,
>>
>> -garrett
>
> Garrett,
>
>
> this commit seems to make the test fail on the Win32/XP/VS2005  
> buildslave. On
> Mac & Linux buildslaves it looks ok.

Could this be because the regex for comparing output includes \n, and on
Windows a newline is effectively \r\n?

I reviewed the code, and this is the only thing I could suspect. Sorry, I
don't have a Windows box to test it right now...

Could somebody with a windows box, apply the attached patch, test and see?

Regards,
Madan.

Re: [PATCH] Factorise 'svn lock' validation code

Posted by Lieven Govaerts <lg...@mobsol.be>.
Quoting Garrett Rooney <ro...@electricjellyfish.net>:

> On 5/24/06, Madan U Sreenivasan <ma...@collab.net> wrote:
> > On Wed, 24 May 2006 14:46:42 +0530, Peter N. Lundblad
> > <pe...@famlundblad.se> wrote:
> >
> > > Madan U Sreenivasan writes:
> > [snip]
> > >  > +  lock_info = output[-6:-1]
> > >
> > > While here, we could as well make this not break if we add more fields
> > > to the info output in the future.  I think we should search for the
> > > specific
> > > fields, which is much easier with this refactorization.
> >
> > Aye! Aye! captain!, pl. find the patch attached. :)
> >
> > [[[
> > Factorize out code for locking a path and validating
> > the contents of the lock.
> >
> > Suggested by: djames
> > Review by: lundblad
> >
> > * subversion/tests/cmdline/svntest/actions.py
> >    (run_and_validate_lock): New function to lock a path, and
> >     validate the contents of the lock.
> >
> > * subversion/tests/cmdline/lock_tests.py
> >    (examine_lock, examine_lock_via_url, examine_lock_encoded_recurse):
> >     Modified to use svntest.actions.run_and_validate_lock().
> > ]]]
>
> Committed in r20086.  Thanks,
>
> -garrett

Garrett,


this commit seems to make the test fail on the Win32/XP/VS2005 buildslave. On
Mac & Linux buildslaves it looks ok.

More info (tests.log) here:
http://www.mobsol.be/buildbot/win32-xp%20VS2005/builds/81/step-Test%20fsfs%2Bra_local/0


I don't have the time to look into it right now; if needed, I can look into
the problem this evening.

Lieven.


----------------------------------------------------------------
This message was sent using IMP, the Internet Messaging Program.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [PATCH] Factorise 'svn lock' validation code

Posted by Madan U Sreenivasan <ma...@collab.net>.
On Wed, 14 Jun 2006 01:10:06 +0530, Garrett Rooney  
<ro...@electricjellyfish.net> wrote:

> On 5/24/06, Madan U Sreenivasan <ma...@collab.net> wrote:
>> On Wed, 24 May 2006 14:46:42 +0530, Peter N. Lundblad
>> <pe...@famlundblad.se> wrote:
>>
>> > Madan U Sreenivasan writes:
>> [snip]
>> >  > +  lock_info = output[-6:-1]
>> >
>> > While here, we could as well make this not break if we add more fields
>> > to the info output in the future.  I think we should search for the
>> > specific
>> > fields, which is much easier with this refactorization.
>>
>> Aye! Aye! captain!, pl. find the patch attached. :)
>>
>> [[[
>> Factorize out code for locking a path and validating
>> the contents of the lock.
>>
>> Suggested by: djames
>> Review by: lundblad
>>
>> * subversion/tests/cmdline/svntest/actions.py
>>    (run_and_validate_lock): New function to lock a path, and
>>     validate the contents of the lock.
>>
>> * subversion/tests/cmdline/lock_tests.py
>>    (examine_lock, examine_lock_via_url, examine_lock_encoded_recurse):
>>     Modified to use svntest.actions.run_and_validate_lock().
>> ]]]
>
> Committed in r20086.  Thanks,

Thank you.


Regards,
Madan.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [PATCH] Factorise 'svn lock' validation code

Posted by Garrett Rooney <ro...@electricjellyfish.net>.
On 5/24/06, Madan U Sreenivasan <ma...@collab.net> wrote:
> On Wed, 24 May 2006 14:46:42 +0530, Peter N. Lundblad
> <pe...@famlundblad.se> wrote:
>
> > Madan U Sreenivasan writes:
> [snip]
> >  > +  lock_info = output[-6:-1]
> >
> > While here, we could as well make this not break if we add more fields
> > to the info output in the future.  I think we should search for the
> > specific
> > fields, which is much easier with this refactorization.
>
> Aye! Aye! captain!, pl. find the patch attached. :)
>
> [[[
> Factorize out code for locking a path and validating
> the contents of the lock.
>
> Suggested by: djames
> Review by: lundblad
>
> * subversion/tests/cmdline/svntest/actions.py
>    (run_and_validate_lock): New function to lock a path, and
>     validate the contents of the lock.
>
> * subversion/tests/cmdline/lock_tests.py
>    (examine_lock, examine_lock_via_url, examine_lock_encoded_recurse):
>     Modified to use svntest.actions.run_and_validate_lock().
> ]]]

Committed in r20086.  Thanks,

-garrett

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [PATCH] Factorise 'svn lock' validation code

Posted by Madan U Sreenivasan <ma...@collab.net>.
On Wed, 24 May 2006 14:46:42 +0530, Peter N. Lundblad  
<pe...@famlundblad.se> wrote:

> Madan U Sreenivasan writes:
[snip]
>  > +  lock_info = output[-6:-1]
>
> While here, we could as well make this not break if we add more fields
> to the info output in the future.  I think we should search for the  
> specific
> fields, which is much easier with this refactorization.

Aye! Aye! captain!, pl. find the patch attached. :)

[[[
Factorize out code for locking a path and validating
the contents of the lock.

Suggested by: djames
Review by: lundblad

* subversion/tests/cmdline/svntest/actions.py
   (run_and_validate_lock): New function to lock a path, and
    validate the contents of the lock.

* subversion/tests/cmdline/lock_tests.py
   (examine_lock, examine_lock_via_url, examine_lock_encoded_recurse):
    Modified to use svntest.actions.run_and_validate_lock().
]]]


Regards,
Madan.

Re: [PATCH] Factorise 'svn lock' validation code

Posted by "Peter N. Lundblad" <pe...@famlundblad.se>.
Madan U Sreenivasan writes:
 > 
 > +def run_and_validate_lock(path, username, password):
 > +  """`svn lock' the given path and validate the contents of the lock.
 > +     Use the given username. This is important because locks are
 > +     user specific."""
 >  
 > +  comment = "Locking path:%s." % path
 > +
 > +  # lock the path
 > +  run_and_verify_svn(None, ".*locked by user", [], 'lock',
 > +                     '--username', username,
 > +                     '--password', password,
 > +                     '-m', comment, path)
 > +
 > +  # Run info and check that we get the lock fields.
 > +  output, err = run_and_verify_svn(None, None, [],
 > +                                   'info','-R', 
 > +                                   path)
 > +
 > +  lock_info = output[-6:-1]

While here, we could as well make this not break if we add more fields
to the info output in the future.  I think we should search for the specific
fields, which is much easier with this refactorization.
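Something along these lines, scanning the `info` output for named fields instead of slicing fixed positions (a sketch only; the field labels used here are assumptions, not verified against actual `svn info` output):

```python
def extract_lock_fields(info_lines,
                        fields=('Lock Token', 'Lock Owner',
                                'Lock Created', 'Lock Comment')):
    """Collect lock-related fields from `svn info` output by name, so
    the test keeps working if new fields are added to the output."""
    found = {}
    for line in info_lines:
        for field in fields:
            if line.startswith(field + ':'):
                # Split only on the first colon; values may contain colons.
                found[field] = line.split(':', 1)[1].strip()
    return found

output = [
    'Path: iota',
    'URL: file:///tmp/repos/iota',
    'Lock Token: opaquelocktoken:1234',
    'Lock Owner: jrandom',
]
info = extract_lock_fields(output)
assert info['Lock Owner'] == 'jrandom'
assert info['Lock Token'] == 'opaquelocktoken:1234'
```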

Regards,
//Peter

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: svn diff on renamed files

Posted by Chris Hecker <ch...@d6.com>.
>:-) Can you send just the parts that aren't covered in 1093?  Either
>one person filters (you), or everyone else filters...

Actually, I was kinda being hyperbolic; the mail's not that long, and most 
of it is valuable, if I do say so myself. :) Here's the [shortened] 
suggestions part; if you want the rationale for any of them, read the whole 
mail, which I've conveniently put at the bottom of this message:

>1.  Fixing diff so it tracks renames on the file, so -r2:7 new.txt works 
>like you'd expect (meaning you didn't have to know the file was renamed 6 
>times before you joined the company 18000 versions ago). <snip>
>
>2.  Making a symbol for URL-to-the-current-wc-directory (like the symbols 
>for HEAD, PREV, etc.), so I can just say REP_URL/foo.txt (or whatever) to 
>specify the full path.  This would just get the Url: from info and use it, 
>nothing fancy, just a shorthand.  <snip>
>
>3.  If there's an @ symbol on a non-full-url'd file name, still look for 
>it at that revision number in the repository.  In other words, old.txt is 
>not in my wc, but old.txt@3 makes total sense contextually, even though 
>there's no old.txt in my wc.  This would save a lot of typing.

Chris



-------------
>Date: Thu, 26 Jun 2003 00:20:18 -0700
>To: dev@subversion.tigris.org
>From: Chris Hecker <ch...@d6.com>
>Subject: svn diff on renamed files
>
>
>
>Like an idiot, I typed up a huge email before looking at the issues 
>list.  Anyway, 1093 covers a fair bit of this, but it seems like there are 
>some issues and ideas mentioned in this mail that I didn't see in the bug 
>report, so I figured it has some value and I'd send it instead of deleting it.
>
>
>----------
>
>
>Hopefully I'm not making some total newbie mistake here, although if I am 
>that's good because it means what I want is possible.  :)
>
>
>I have a file that has a number of revisions as old.txt, and then it gets 
>renamed to new.txt and gets some more revisions.  I've got an up-to-date 
>repository, so I have new.txt in my directory.  I want to diff two earlier 
>versions of the file, before the rename, or do a diff across the 
>rename.  It looks like I have to use an absolute URL to do this?
>
>
>That's really, uh, tedious.  First, it means you have to know exactly when 
>the file got renamed or you get bogus diffs, since svn diff assumes an 
>empty file if you specify a revision before the file existed (in other 
>words, old.txt renamed to new.txt at r5, then svn diff -r2:7 new.txt gives 
>the complete new.txt at r7, not the diff between foo@2 and 
>bar@7).  Knowing means scrolling through a log of the file's entire 
>history looking for the A ... (from ...) line, and there might be multiple 
>ones, and you have to keep track of which versions were which names to do 
>a correct diff.  Second, it means you have to use the full URL even if 
>you've got an up to date working copy and you want to compare the current 
>wc file against a previous version, and so you need to know it, so you 
>need to run svn info and figure it out, etc.  diff also seems to not want 
>to diff urls and wc files, since it prints out a TBD warning, meaning it's 
>currently impossible to diff a pre-rename version of a file against the wc 
>(without cat'ing the old file to a temp), unless I'm missing something.
>
>
>I suggest three things (that aren't that well thought out, but here goes 
>anyway):
>
>
>1.  Fixing diff so it tracks renames on the file, so -r2:7 new.txt works 
>like you'd expect (meaning you didn't have to know the file was renamed 6 
>times before you joined the company 18000 versions ago).  Log already sort 
>of works this way (although not completely, because you can't svn log 
>-r3:4 new.txt if it didn't exist before r4, but it works if you just svn 
>log new.txt), in fact you have to use svn log -v file.txt to even figure 
>out the file was renamed as far as I can tell.
>
>
>2.  Making a symbol for URL-to-the-current-wc-directory (like the symbols 
>for HEAD, PREV, etc.), so I can just say REP_URL/foo.txt (or whatever) to 
>specify the full path.  This would just get the Url: from info and use it, 
>nothing fancy, just a shorthand.  You should also be able to do 
>REP_URL/../../blah/other.txt and have it work too, although that's less 
>important (and there's some ambiguity about whether you mean the Url from 
>this directory or the blah directory).  This, incidentally, would also 
>help clean up the svn:externals "modules" url problem from the FAQ, since 
>it would just do the right thing for your current directory regardless of 
>which repository it's from.
>
>
>3.  If there's an @ symbol on a non-full-url'd file name, still look for 
>it at that revision number in the repository.  In other words, old.txt is 
>not in my wc, but old.txt@3 makes total sense contextually, even though 
>there's no old.txt in my wc.  This would save a lot of typing.
>
>
>Chris
>
>
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
>For additional commands, e-mail: dev-help@subversion.tigris.org


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: Timestamp Frustrations

Posted by tr...@clayst.com.
On 3 Jun 2005 James Berry wrote:

> Sounds to me like maybe you just want rsync.

I might, if this were a Linux environment.  But it's Windows.  I see 
that one can use rsync on Windows via Cygwin but it appears to be 
fairly involved to set up.

--
Tom




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Timestamp Frustrations

Posted by James Berry <ja...@jberry.us>.
On Jun 3, 2005, at 6:52 AM, trlists@clayst.com wrote:

> Any other ideas? :-)

Sounds to me like maybe you just want rsync.

-jdb

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Support for Pimoroni Tiny2040

Posted by Peter Kalbus <pt...@mailbox.org.INVALID>.
Hi Alan,

thanks for the feedback.

Support will go into 'boards/arm/rp2040/pimoroni-tiny2040'.

I'll create a PR once it's done.

/Piet


> On 07.01.2022 at 15:50, Alan Carvalho de Assis <ac...@gmail.com> wrote:
> 
> Hi Peter,
> 
> You can use the RaspberryPico board, but I think it is a good idea to
> create a new directory for the Pimoroni Tiny2040 in NuttX, because the
> RaspberryPico could have different hardware features and different
> pin usage.
> 
> So, if you create a new directory entry for the Tiny2040, you can submit
> it to mainline and let more people use it.
> 
> BR,
> 
> Alan
> 
> On 1/7/22, Kalbus, Peter <pt...@mailbox.org.invalid> wrote:
>> Corrected subject …
>> 
>> /Piet
>> 
>> 
>>> On 07.01.2022 at 14:55, Kalbus, Peter <pt...@mailbox.org.invalid> wrote:
>>> 
>>> Hi,
>>> 
>>> I own a Pimoroni Tiny2040 device.
>>> It’s an RP2040-based device with some interesting features:
>>> 
>>> + small form factor (18x21.3mm)
>>> + 8 MByte Flash
>>> + RGB LED
>>> + Reset Button
>>> + USB-C
>>> - reduced pin-header
>>> 
>>> I’d like to add the device based on the available RP2040 support.
>>> It’s basically working, but the specifics are missing.
>>> 
>>> 
>>> My plan is to re-use the code from
>>> 
>>> 'boards/arm/rp2040/raspberrypi-pico'
>>> 
>>> and add the support in
>>> 
>>> 'boards/arm/rp2040/pimoroni-tiny2040'.
>>> 
>>> 
>>> Anything wrong with this approach?
>>> Someone already working on this device?
>>> 
>>> 
>>> Regards
>>> Piet
>>> 
>> 


Re: Support for Pimoroni Tiny2040

Posted by Alan Carvalho de Assis <ac...@gmail.com>.
Hi Peter,

You can use the RaspberryPico board, but I think it is a good idea to
create a new directory for the Pimoroni Tiny2040 in NuttX, because the
RaspberryPico could have different hardware features and different
pin usage.

So, if you create a new directory entry for the Tiny2040, you can submit it
to mainline and let more people use it.

BR,

Alan

On 1/7/22, Kalbus, Peter <pt...@mailbox.org.invalid> wrote:
> Corrected subject …
>
> /Piet
>
>
>> On 07.01.2022 at 14:55, Kalbus, Peter <pt...@mailbox.org.invalid> wrote:
>>
>> Hi,
>>
>> I own a Pimoroni Tiny2040 device.
>> It’s an RP2040-based device with some interesting features:
>>
>>  + small form factor (18x21.3mm)
>>  + 8 MByte Flash
>>  + RGB LED
>>  + Reset Button
>>  + USB-C
>>  - reduced pin-header
>>
>> I’d like to add the device based on the available RP2040 support.
>> It’s basically working, but the specifics are missing.
>>
>>
>> My plan is to re-use the code from
>>
>>  'boards/arm/rp2040/raspberrypi-pico'
>>
>> and add the support in
>>
>>  'boards/arm/rp2040/pimoroni-tiny2040'.
>>
>>
>> Anything wrong with this approach?
>> Someone already working on this device?
>>
>>
>> Regards
>> Piet
>>
>

Re: %HOME% on Win32 (Re: ra_dav compression question)

Posted by Chris Hecker <ch...@d6.com>.
>Exactly. Unix ports. Subversion on Windows is _not_ a Unix port, believe
>it or not.

Sure, that's why I put the parenthetical comment in there.  My point is 
there's going to be a high correlation between people who have %HOME% 
defined and people who use svn.exe.  But anyway, I'm not saying to break 
the old way, I'm saying to check for the new way in addition.  I don't 
think it will be much code.  And, as a side benefit, the 8 zillion comments 
in the code referring to ~/.subversion will suddenly be optionally 
applicable to Win32 (since you're _not_ a unix port, this might appeal to 
you :).

>You can already change your %APPDATA% setting to, say, a subdirectory of
>your %HOME%. So you can get almost exactly what you want without
>changing a single line of SVN code.

This will destroy the universe, I'm fairly sure.  Windows is really bad 
about that kind of thing.  I've never tried it, however.  Have you?  But 
even if it did work, it's not what you want; you don't want a huge slew of 
random windows apps writing into your home directory, you want your 
"unix-like" command line tools to do so.  So, this should be a subversion 
specific thing, I think.

>Note that
>%HOME% is not a standard env. var on Windows, so you're proposing a
>"new" variable in any case. :-)
>However, Subversion also _creates_ that directory; how does it know
>which you prefer?

Sorry, I wasn't clear.  I was saying that if it finds that directory is not 
there, it will check for %HOME%/.subversion as well.  That's all.  So, most 
of the time it will just use %APPDATA% for people who don't care, but if 
somebody does then they'll move the directory and svn will do the right thing.

From looking at it briefly, it'll be very little code to make this change: 
only a few lines in one function (svn_config__user_config_path) and one 
line in the config_impl.h header to do it right.
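The lookup order described here can be sketched like this (illustrative Python only, not the actual C code in svn_config__user_config_path; the function name below is made up):

```python
import os

def user_config_dir():
    r"""Prefer the standard %APPDATA%\Subversion directory; if it does
    not exist but %HOME%/.subversion does, use the latter instead."""
    appdata = os.environ.get('APPDATA')
    home = os.environ.get('HOME')
    if appdata and os.path.isdir(os.path.join(appdata, 'Subversion')):
        return os.path.join(appdata, 'Subversion')
    if home and os.path.isdir(os.path.join(home, '.subversion')):
        return os.path.join(home, '.subversion')
    # Neither exists yet: fall back to the platform default, which the
    # caller would then create as before.
    return os.path.join(appdata, 'Subversion') if appdata else None
```

For people who don't care, nothing changes; anyone who moves their config to %HOME%/.subversion gets it picked up automatically.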

>That won't work, .lnk files are handled by the Explorer, not the
>filesystem. However, on Win2k and above, you can make
>%APPDATA%\Subversion a junction to anywhere you like -- again, not
>having to change a single line of code in SVN.

Except junctions aren't very well documented or supported and there are 
reports of filesystem weirdness when using them.  The .lnk suggestion was 
because a number of windows apps are starting to parse .lnk as a symbolic 
link transparently since junctions aren't first class citizens.  It's not 
hard, but it is more windows cruft.  The directory check is way simpler and 
cleaner, I think.

Again, I'm happy to write the 10 lines of code and test them and send a 
patch to the HEAD, I just want to know it will be considered before I do 
that.  If you're absolutely not interested I won't bother.  However, I 
think the feature is sound and this is the best of the options.

Chris



---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: Problem to parse a PDF document

Posted by Timo Boehme <ti...@ontochem.com>.
Dear Pierre Huttin,

On 14.06.2012 10:07, pierre@huttin.com wrote:
> Many thanks, I have attached the file to the issue.

Thanks.

> Now it works fine for this kind of document, but I have a side effect
> on other documents, which worked fine in the past.
>
> I receive the following error message.
>
> Caused by: java.io.IOException: Error: Expected an integer type,
> actual='xref'
> org.apache.pdfbox.pdfparser.BaseParser.readInt(BaseParser.java:1541)
> org.apache.pdfbox.pdfparser.NonSequentialPDFParser.parseXrefObjStream(NonSequentialPDFParser.java:354)
> org.apache.pdfbox.pdfparser.NonSequentialPDFParser.initialParse(NonSequentialPDFParser.java:266)
> org.apache.pdfbox.pdfparser.NonSequentialPDFParser.parse(NonSequentialPDFParser.java:574)
> org.apache.pdfbox.pdmodel.PDDocument.loadNonSeq(PDDocument.java:1124)
> org.apache.pdfbox.pdmodel.PDDocument.loadNonSeq(PDDocument.java:1107)
>
> If I use the PDDocument.load() method I receive this warning message :
>
> 14 juin 2012 09:58:30 org.apache.pdfbox.pdfparser.XrefTrailerResolver
> setStartxref
> ATTENTION: Did not found XRef object at specified startxref position
> 173
>
> but the document is correctly loaded by PDFBox.

As I see it, the document is broken: the offset specified in startxref 
does not point to the start of the xref section. Since 
NonSequentialPDFParser currently has only a few options to recover from 
parsing problems, it stops by throwing an exception. With PDDocument.load 
you use the standard PDFParser, which copes better with a corrupt xref 
definition (ignoring it and detecting the start of objects by itself), but 
it has other problems because it does not use the xref definitions in some 
cases. Thus, to get the best of both, you should first use 
PDDocument.loadNonSeq() and, if that fails with an exception, try again 
(fall back) with PDDocument.load().
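The try-strict-then-fall-back pattern described here is generic; as a sketch (in Python with stand-in loader functions, not the real PDFBox Java API):

```python
def load_with_fallback(path, strict_loader, lenient_loader):
    """Try the stricter parser first (it honors the xref table); only
    if it raises do we retry with the lenient parser."""
    try:
        return strict_loader(path)
    except IOError:
        return lenient_loader(path)

# Stand-ins for PDDocument.loadNonSeq and PDDocument.load:
def strict(path):
    raise IOError("Error: Expected an integer type, actual='xref'")

def lenient(path):
    return 'parsed-with-fallback'

assert load_with_fallback('broken.pdf', strict, lenient) == 'parsed-with-fallback'
assert load_with_fallback('ok.pdf', lambda p: 'parsed-strictly', lenient) == 'parsed-strictly'
```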

> I have a problem with the sample file, because it contains some
> confidential data.

It is quite clear to me that startxref is wrong. However, you could send 
only the tail of the file (which contains the 'startxref' and following 
lines) and the first 220 bytes (according to the exception, xref is 
supposed to start at 173). With this information, which shouldn't contain 
any confidential data, I could verify the diagnosis.


Best regards,
Timo

> On Thu, 14 Jun 2012 00:23:49 +0200, Timo Boehme
> <ti...@ontochem.com>  wrote:
>> On 13.06.2012 14:02, pierre@huttin.com wrote:
>>> Sorry,
>>>
>>> apparently the pdf was not correctly attached to the previous mail, I
>>> just zip it and re-attach it.
>>>
>>> Pierre Huttin
>>
>> With resolving PDFBOX-1099
>> (https://issues.apache.org/jira/browse/PDFBOX-1099) the page count is
>> correct with both parsers (NonSequentialPDFParser and PDFParser).
>>
>> For testing purposes it would be helpful to have your example PDF
>> associated with PDFBOX-1099. Could you upload it to this issue (and
>> tick 'Grant license to ASF for inclusion in ASF works (as per the
>> Apache License §5)'), or give permission to do so for the file
>> attached to your previous email with a license grant?
>>
>>
>> Best regards,
>> Timo
>>
>>>
>>> On Wed, 13 Jun 2012 13:56:50 +0200, <pi...@huttin.com> wrote:
>>>> Hello,
>>>>
>>>> I have some trouble with documents where the library is not able to
>>>> retrieve the number of pages and load them into the list using the
>>>> PDDocument.getDocumentCatalog().getAllPages() method.
>>>>
>>>> The PDF file and the Java code to retrieve the number of pages are
>>>> attached to this mail. Apparently the PDFParser does not read the
>>>> /Pages object correctly; the refs of the pages are "8 0" and "19 0".
>>>>
>>>> I can open the document correctly with Adobe Reader and iText RUPS;
>>>> both retrieve the correct number of pages: 2.
>>>>
>>>> I am running my code with version 1.7.0 of PDFBox.
>>>>
>>>> Thanks in advance for your help.
>>>>
>>>> Best regards
>>>>
>>>> Pierre Huttin
>


-- 

  Timo Boehme
  OntoChem GmbH
  H.-Damerow-Str. 4
  06120 Halle/Saale
  T: +49 345 4780474
  F: +49 345 4780471
  timo.boehme@ontochem.com

_____________________________________________________________________

  OntoChem GmbH
  Geschäftsführer: Dr. Lutz Weber
  Sitz: Halle / Saale
  Registergericht: Stendal
  Registernummer: HRB 215461
_____________________________________________________________________


Re: [users@httpd] Setting up proxing and stopping POST SMTP

Posted by Robert Moskowitz <rg...@htt-consult.com>.
At 04:19 PM 3/27/2003 +1100, Zac Stevens wrote:

>It doesn't work because the argument to <Directory> is a path on the
>filesystem.  This is clearly explained in the documentation:
>
>http://httpd.apache.org/docs-2.0/mod/core.html#directory
>
>You should be able to put the LimitExcept rules into your <Proxy *> block.
>
>I would suggest reading the documentation on section containers for
>additional information:
>
>http://httpd.apache.org/docs-2.0/sections.html

Oh, I read it all, and the text I used was lifted right out of the 
docs.  Of course now I can't find that URL (I didn't bookmark it) to show a 
doc error :)

And I have spent considerable time with the docs.  I guess I am just not 
grokking the doc layout.  I cannot find anything on LimitExcept.  Did a 
search.  Plowed around.

Is this even the right kind of content to block?  Or is the nature of the 
attack so varied that I will spend a month setting up all sorts of filters, 
and the only effective filter is to only allow proxying from trusted IP 
addresses?




---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Setting up proxing and stopping POST SMTP

Posted by Zac Stevens <zt...@cryptocracy.com>.
On Thu, Mar 27, 2003 at 12:09:15AM -0500, Robert Moskowitz wrote:
> But I am still concerned that someone might find a way to use one of my IP 
> addresses to do the POST SMTP, and would like to figure out why
> 
> <Directory proxy *>

<--snip-->

> is bad syntax and if this is the right syntax....

It doesn't work because the argument to <Directory> is a path on the
filesystem.  This is clearly explained in the documentation:

http://httpd.apache.org/docs-2.0/mod/core.html#directory

You should be able to put the LimitExcept rules into your <Proxy *> block.
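For instance, combining the two might look like this (an untested sketch, using the directives from the original attempt, in Apache 2.0 syntax):

```apache
<Proxy *>
    <LimitExcept GET HEAD POST>
        Order deny,allow
        Deny from all
    </LimitExcept>
</Proxy>
```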

I would suggest reading the documentation on section containers for
additional information:

http://httpd.apache.org/docs-2.0/sections.html

Cheers,


Zac

---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


RE: [users@httpd] Setting up proxing and stopping POST SMTP

Posted by Robert Moskowitz <rg...@htt-consult.com>.
At 06:23 PM 3/26/2003 -0500, Jeff Cohen wrote:
>Are you talking about a Proxy Server or the Proxy functionality in the
>Apache Web Server?

Proxy functionality in the Apache Web Server.  I only have the one 
system.  This is a small operation despite the /26 allocation.

Anyway, I now have the basic proxying working.  See my previous note.  It 
is appalling how many GETs there are to access www.cnn.com!

Also, I believe that limiting the proxy to my address range stops the POST 
SMTP proxies as well:

segfault.monkeys.com - - [26/Mar/2003:23:40:06 -0500] "CONNECT 
outland.monkeys.com:25 HTTP/1.0" 403 309

But I am still concerned that someone might find a way to use one of my IP 
addresses to do the POST SMTP, and would like to figure out why

<Directory proxy *>
         <LimitExcept GET HEAD POST>
         Order deny,allow
         Deny from all
         </LimitExcept>
</Directory>

is bad syntax and if this is the right syntax....



---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


RE: [users@httpd] Setting up proxing and stopping POST SMTP

Posted by Jeff Cohen <li...@gej-it.com>.
Are you talking about a Proxy Server or the Proxy functionality in the
Apache Web Server?
Jeff Cohen

> -----Original Message-----
> From: Robert Moskowitz [mailto:rgm@htt-consult.com]
> Sent: Wednesday, March 26, 2003 10:01 AM
> To: users@httpd.apache.org; users@httpd.apache.org
> Subject: Re: [users@httpd] Setting up proxing and stopping POST SMTP
> 
> At 09:51 AM 3/26/2003 -0500, Jeff Cohen wrote:
> >hi Robert,
> >
> >What is the exact Location that you want to be proxied?
> >from what I understand you are trying to have http://yourdomain.com/proxy
> >to be filtered, is that right?
> 
> I am trying to set up the server so that systems in my domain can use it
> as a proxy to get out.
> 
> In particular, when I am on the road, I SSH back to my domain and run
> SMTP, POP3, and HTTP through the tunnel.
> 
> Of course my SMTP and POP3 servers are local.  This gives those unsecured
> protocols the security they need.
> 
> For HTTP, my desire is to not expose what I do (like bank access) on
> wireless networks and the like.  So I set up my browser to proxy to
> localhost, and SSH tunnels port 80 to my web server.  This was working
> with 1.3.  Now I want it working with 2.0.
> 
> And with proxy on, not to let the bas*ards relay SMTP spam through my
> server....
> 
> 
> 
> 
> ---------------------------------------------------------------------
> The official User-To-User support forum of the Apache HTTP Server Project.
> See <URL:http://httpd.apache.org/userslist.html> for more info.
> To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
>    "   from the digest: users-digest-unsubscribe@httpd.apache.org
> For additional commands, e-mail: users-help@httpd.apache.org


---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [users@httpd] Setting up proxing and stopping POST SMTP

Posted by Robert Moskowitz <rg...@htt-consult.com>.
At 09:51 AM 3/26/2003 -0500, Jeff Cohen wrote:
>hi Robert,
>
>What is the exact Location that you want to be proxied?
>from what I understand you are trying to have http://yourdomain.com/proxy
>to be filtered, is that right?

I am trying to set up the server so that systems in my domain can use it as 
a proxy to get out.

In particular, when I am on the road, I SSH back to my domain and run SMTP, 
POP3, and HTTP through the tunnel.

Of course my SMTP and POP3 servers are local.  This gives those unsecured 
protocols the security they need.

For HTTP, my desire is to not expose what I do (like bank access) on 
wireless networks and the like.  So I set up my browser to proxy to 
localhost, and SSH tunnels port 80 to my web server.  This was working with 
1.3.  Now I want it working with 2.0.

And with proxy on, not to let the bas*ards relay SMTP spam through my 
server....




---------------------------------------------------------------------
The official User-To-User support forum of the Apache HTTP Server Project.
See <URL:http://httpd.apache.org/userslist.html> for more info.
To unsubscribe, e-mail: users-unsubscribe@httpd.apache.org
   "   from the digest: users-digest-unsubscribe@httpd.apache.org
For additional commands, e-mail: users-help@httpd.apache.org


Re: [WSDL2C] Bug in C - ServerStubs - Unexpected Subelement

Posted by Dimuthu Gamage <di...@gmail.com>.
Hi,
    This is an error in the serialize logic and the deserialize logic.
The problem arises because adb structures are generated for
all the elements (and for complex types), and there are two ways to
define an element:

    1. <xs:element name="matrixAdd" type="ns1:matrixAddType">
       </xs:element>
       <xs:complexType name="matrixAddType">
           <xs:sequence>
              <xs:element minOccurs="0" name="param0" nillable="true"
type="ns1:Matrix"/>
              <xs:element minOccurs="0" name="param1" nillable="true"
type="ns1:Matrix"/>
           </xs:sequence>
        </xs:complexType>

and

    2. <xs:element name="matrixAdd">
                <xs:complexType>
                    <xs:sequence>
                        <xs:element minOccurs="0" name="param0"
nillable="true" type="ns1:Matrix"/>
                        <xs:element minOccurs="0" name="param1"
nillable="true" type="ns1:Matrix"/>
                    </xs:sequence>
                </xs:complexType>
       </xs:element>


If the above-mentioned "if(has_parent)" check is not done, in the
first case two matrixAdd elements are generated in the XML, one inside
the other. So to distinguish these two cases, we have to use the @anon
flag in the adb templates.

I have done several tests, and the changes seem to be working.

Used WSDLs:

Adder.wsdl - all outer elements have anonymous types
Adder2.wsdl - Outer elements have anonymous and named types
Adder3.wsdl - All having anonymous type

The attachment consists of the test code (for both stub and skel).

step1 - when if(has_parent) is used in line 1575 -- all outer elements
have anonymous types
step2 - when if(!has_parent) is used
step3 - when the if is removed
step4 - when the if is removed and MatrixAdd has a named type (not a
anonymous type)

step5 - when the @anon is used with all outer elements have anonymous types
step6 - when the @anon is used with outer elements have anonymous
types and named types
step7 - when the @anon is used with all having anonymous types -Adder3.wsdl


Thanks
Dimuthu

On 8/9/07, Dr. Florian Steinborn <fl...@drb.insel.de> wrote:
> Hi Samisa
>
> > Looks like a logic error in the generated code. I could change the style
> > sheet logic to include this.
> > However, I would like to have a test case to ensure the change works.
> >
>
> attached you find a WSDL and the generated C code with the error. This
> could make it easier to see your change works.
>
> Thanks,
> Flori
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: axis-c-user-unsubscribe@ws.apache.org
> For additional commands, e-mail: axis-c-user-help@ws.apache.org
>
>

Re: [WSDL2C] Bug in C - ServerStubs - Unexpected Subelement

Posted by "Dr. Florian Steinborn" <fl...@drb.insel.de>.
Hi Samisa

> Looks like a logic error in the generated code. I could change the style  
> sheet logic to include this.
> However, I would like to have a test case to ensure the change works.
>

attached you find a WSDL and the generated C code with the error. This  
could make it easier to see your change works.

Thanks,
Flori

Re: [WSDL2C] Bug in C - ServerStubs - Unexpected Subelement

Posted by Samisa Abeysinghe <sa...@wso2.com>.
Looks like a logic error in the generated code. I could change the style 
sheet logic to include this.
However, I would like to have a test case to ensure the change works.

Samisa...

Dr. Florian Steinborn wrote:
> Hi,
>
> just to let you know what we did to temporarily bypass the problem:
>
> We added just one exclamation mark and it works:
>
> [...]
>
>> if(has_parent)
>>    axutil_stream_write(stream, env, start_input_str,    
>> start_input_str_len);
>
> changed to
>
> if(!has_parent)
>    axutil_stream_write(stream, env, start_input_str,
>    start_input_str_len);
>
> [...]
>
> Needless to say this is editing the generated source - the way we did 
> not want to go.
>
> Thanks and greetings from Berlin,
>
> Flori
>
>


-- 
Samisa Abeysinghe : WSO2 Web Services Framework/C - Open source C library for providing and consuming Web services (http://wso2.org/projects/wsf/c)




Re: strange get_range_slices behaviour v0.6.1

Posted by Jonathan Ellis <jb...@gmail.com>.
On Tue, May 4, 2010 at 4:17 PM, aaron <aa...@thelastpickle.com> wrote:
> I was noticing cases under the random partitioner where keys I expected to
> be returned
> were not. Can you give a little advice on the expected behaviour of
> get_range_slices
> with the RP and I'll try to write a JUnit for it. e.g. Is it essentially
> the same as
> under the OPP but order is undefined?

Right.  That's why it warns you when you specify an end key under RP;
I can't think of a scenario where that would make sense.
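This can be seen by computing tokens directly. Below is a small Python sketch, an approximation of the MD5-based RandomPartitioner rather than Cassandra's exact code (Cassandra's sign handling of the digest may differ slightly):

```python
import hashlib

def token(key):
    """Illustrative RandomPartitioner-style token: the MD5 digest of the
    key interpreted as a non-negative big integer. This mirrors, but is
    not guaranteed byte-for-byte identical to, Cassandra's hashing."""
    return int.from_bytes(hashlib.md5(key.encode()).digest(), "big")

keys = ["object1", "object2", "object3"]
by_token = sorted(keys, key=token)

# Lexical order and token (ring) order need not agree, which is why a
# range scan under RP returns keys in an order unrelated to the raw key
# bytes, and why specifying an end *key* (rather than an end token) is
# not meaningful.
print(sorted(keys))   # lexical order
print(by_token)       # token order -- may be any permutation of the keys
```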

-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com

Re: strange get_range_slices behaviour v0.6.1

Posted by aaron <aa...@thelastpickle.com>.
Thanks Jonathan.

After looking at the Lucandra code I realized my confusion has to do with
get_range_slices and the RandomPartitioner. When I switched to the OPP I
got the expected behaviour.


I was noticing cases under the random partitioner where keys I expected to
be returned 
were not. Can you give a little advice on the expected behaviour of
get_range_slices 
with the RP and I'll try to write a JUnit for it. e.g. Is it essentially
the same as 
under the OPP but order is undefined? 

Thanks
Aaron


On Mon, 3 May 2010 10:27:37 -0500, Jonathan Ellis <jb...@gmail.com>
wrote:
> Util.range returns a Range object which is end-exclusive.  (You want
> "Bounds" for end-inclusive.)
> 
> On Sun, May 2, 2010 at 7:19 AM, aaron morton <aa...@thelastpickle.com>
> wrote:
>> Hi there, I'm still getting odd behavior with get_range_slices. I've
>> created a JUnit test that illustrates the case.
>> Could someone take a look and either let me know where my understanding
>> is wrong, or whether this is a real issue?
>>
>>
>> I added the following to ColumnFamilyStoreTest.java
>>
>>
>>    private ColumnFamilyStore insertKey1Key2Key3() throws IOException,
>> ExecutionException, InterruptedException
>>    {
>>        List<RowMutation> rms = new LinkedList<RowMutation>();
>>        RowMutation rm;
>>        rm = new RowMutation("Keyspace2", "key1".getBytes());
>>        rm.add(new QueryPath("Standard1", null, "Column1".getBytes()),
>> "asdf".getBytes(), 0);
>>        rms.add(rm);
>>
>>        rm = new RowMutation("Keyspace2", "key2".getBytes());
>>        rm.add(new QueryPath("Standard1", null, "Column1".getBytes()),
>> "asdf".getBytes(), 0);
>>        rms.add(rm);
>>
>>        rm = new RowMutation("Keyspace2", "key3".getBytes());
>>        rm.add(new QueryPath("Standard1", null, "Column1".getBytes()),
>> "asdf".getBytes(), 0);
>>        rms.add(rm);
>>        return Util.writeColumnFamily(rms);
>>    }
>>
>>
>>    @Test
>>    public void testThreeKeyRangeAll() throws IOException,
>> ExecutionException, InterruptedException
>>    {
>>        ColumnFamilyStore cfs = insertKey1Key2Key3();
>>
>>        IPartitioner p = StorageService.getPartitioner();
>>        RangeSliceReply result =
>> cfs.getRangeSlice(ArrayUtils.EMPTY_BYTE_ARRAY,
>>                   Util.range(p, "key1", "key3"),
>>                   10,
>>                   null,
>>                   Arrays.asList("Column1".getBytes()));
>>        assertEquals(3, result.rows.size());
>>    }
>>
>>    @Test
>>    public void testThreeKeyRangeSkip1() throws IOException,
>> ExecutionException, InterruptedException
>>    {
>>        ColumnFamilyStore cfs = insertKey1Key2Key3();
>>
>>        IPartitioner p = StorageService.getPartitioner();
>>        RangeSliceReply result =
>> cfs.getRangeSlice(ArrayUtils.EMPTY_BYTE_ARRAY,
>>                   Util.range(p, "key2", "key3"),
>>                   10,
>>                   null,
>>                   Arrays.asList("Column1".getBytes()));
>>        assertEquals(2, result.rows.size());
>>    }
>>
>> Running this with "ant test" the partial output is....
>>
>>    [junit] Testsuite: org.apache.cassandra.db.ColumnFamilyStoreTest
>>    [junit] Tests run: 7, Failures: 2, Errors: 0, Time elapsed: 1.405
>> sec
>>    [junit]
>>    [junit] Testcase:
>> testThreeKeyRangeAll(org.apache.cassandra.db.ColumnFamilyStoreTest):
>>  FAILED
>>    [junit] expected:<3> but was:<2>
>>    [junit] junit.framework.AssertionFailedError: expected:<3> but
>> was:<2>
>>    [junit]     at
>> org.apache.cassandra.db.ColumnFamilyStoreTest.testThreeKeyRangeAll(ColumnFamilyStoreTest.java:170)
>>    [junit]
>>    [junit]
>>    [junit] Testcase:
>> testThreeKeyRangeSkip1(org.apache.cassandra.db.ColumnFamilyStoreTest):
>>  FAILED
>>    [junit] expected:<2> but was:<1>
>>    [junit] junit.framework.AssertionFailedError: expected:<2> but
>> was:<1>
>>    [junit]     at
>> org.apache.cassandra.db.ColumnFamilyStoreTest.testThreeKeyRangeSkip1(ColumnFamilyStoreTest.java:184)
>>    [junit]
>>    [junit]
>>    [junit] Test org.apache.cassandra.db.ColumnFamilyStoreTest FAILED
>>
>>
>> Any help appreciated.
>>
>> Aaron
>>
>>
>> On 27 Apr 2010, at 09:38, aaron wrote:
>>
>>>
>>> I've broken this case down further to some Python code that works
>>> against the thrift generated
>>> client and am still getting the same odd results. With keys object1,
>>> object2 and object3 an
>>> open ended get_range_slice starting with "object1" only returns object1
>>> and
>>> 2.
>>>
>>> I'm guessing that I've got something wrong or my expectation of how
>>> get_range_slice works
>>> is wrong, but I cannot see where I've gone wrong. Any help would be
>>> appreciated.
>>>
>>> The Python code to add and read keys is below; it assumes a
>>> Cassandra.Client
>>> connection.
>>>
>>> import time
>>> from cassandra import Cassandra,ttypes
>>> from thrift import Thrift
>>> from thrift.protocol import TBinaryProtocol
>>> from thrift.transport import TSocket, TTransport
>>>
>>>
>>> def add_data(conn):
>>>
>>>   col_path = ttypes.ColumnPath(column_family="Standard1",
>>> column="col_name")
>>>   consistency = ttypes.ConsistencyLevel.QUORUM
>>>
>>>   for key in ["object1", "object2", "object3"]:
>>>       conn.insert("Keyspace1", key, col_path, "col_value",
>>>           int(time.time() * 1e6), consistency)
>>>   return
>>>
>>> def read_range(conn, start_key, end_key):
>>>
>>>   col_parent = ttypes.ColumnParent(column_family="Standard1")
>>>
>>>   predicate = ttypes.SlicePredicate(column_names=["col_name"])
>>>   range = ttypes.KeyRange(start_key=start_key, end_key=end_key,
>>> count=1000)
>>>   consistency = ttypes.ConsistencyLevel.QUORUM
>>>
>>>   return conn.get_range_slices("Keyspace1", col_parent,
>>>               predicate, range, consistency)
>>>
>>>
>>> Below is the result of calling read_range with different start values.
>>> I've
>>> also included
>>> the debug log for each call, the line starting with "reading
>>> RangeSliceCommand" seems to
>>> show that key hash for "object2" is greater than "object3".
>>>
>>> #expect to return objects 1,2 and 3
>>>
>>> In [37]: cass_test.read_range(conn, "object1", "")
>>> Out[37]:
>>>
>>>
[KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595268837,
>>> name='col_name', value='col_value'), super_column=None)],
>>> key='object1'),
>>>
>>>
KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595272693,
>>> name='col_name', value='col_value'), super_column=None)],
>>> key='object3')]
>>>
>>> DEBUG 09:29:59,791 range_slice
>>> DEBUG 09:29:59,791 RangeSliceCommand{keyspace='Keyspace1',
>>> column_family='Standard1', super_column=null,
>>> predicate=SlicePredicate(column_names:[[B@257b40fe]),
>>> range=[121587881847328893689247922008234581399,0], max_keys=1000}
>>> DEBUG 09:29:59,791 Adding to restricted ranges
>>> [121587881847328893689247922008234581399,0] for
>>>
>>>
(75349581786326521367945210761838448174,75349581786326521367945210761838448174]
>>> DEBUG 09:29:59,791 reading RangeSliceCommand{keyspace='Keyspace1',
>>> column_family='Standard1', super_column=null,
>>> predicate=SlicePredicate(column_names:[[B@257b40fe]),
>>> range=[121587881847328893689247922008234581399,0], max_keys=1000} from
>>> 1528@localhost/127.0.0.1
>>> DEBUG 09:29:59,791 Sending RangeSliceReply{rows=Row(key='object1',
>>> cf=ColumnFamily(Standard1
>>> [636f6c5f6e616d65:false:9@1272315595268837,])),Row(key='object3',
>>> cf=ColumnFamily(Standard1
>>> [636f6c5f6e616d65:false:9@1272315595272693,]))}
>>> to 1528@localhost/127.0.0.1
>>> DEBUG 09:29:59,791 Processing response on a callback from
>>> 1528@localhost/127.0.0.1
>>> DEBUG 09:29:59,791 range slices read object1
>>> DEBUG 09:29:59,791 range slices read object3
>>>
>>>
>>> In [38]: cass_test.read_range(conn, "object2", "")
>>> Out[38]:
>>>
>>>
[KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595271798,
>>> name='col_name', value='col_value'), super_column=None)],
>>> key='object2'),
>>>
>>>
KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595268837,
>>> name='col_name', value='col_value'), super_column=None)],
>>> key='object1'),
>>>
>>>
KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595272693,
>>> name='col_name', value='col_value'), super_column=None)],
>>> key='object3')]
>>>
>>> DEBUG 09:34:48,133 range_slice
>>> DEBUG 09:34:48,133 RangeSliceCommand{keyspace='Keyspace1',
>>> column_family='Standard1', super_column=null,
>>> predicate=SlicePredicate(column_names:[[B@7966340c]),
>>> range=[28312518014678916505369931620527723964,0], max_keys=1000}
>>> DEBUG 09:34:48,133 Adding to restricted ranges
>>> [28312518014678916505369931620527723964,0] for
>>>
>>>
(75349581786326521367945210761838448174,75349581786326521367945210761838448174]
>>> DEBUG 09:34:48,133 reading RangeSliceCommand{keyspace='Keyspace1',
>>> column_family='Standard1', super_column=null,
>>> predicate=SlicePredicate(column_names:[[B@7966340c]),
>>> range=[28312518014678916505369931620527723964,0], max_keys=1000} from
>>> 1810@localhost/127.0.0.1
>>> DEBUG 09:34:48,133 Sending RangeSliceReply{rows=Row(key='object2',
>>> cf=ColumnFamily(Standard1
>>> [636f6c5f6e616d65:false:9@1272315595271798,])),Row(key='object1',
>>> cf=ColumnFamily(Standard1
>>> [636f6c5f6e616d65:false:9@1272315595268837,])),Row(key='object3',
>>> cf=ColumnFamily(Standard1
>>> [636f6c5f6e616d65:false:9@1272315595272693,]))}
>>> to 1810@localhost/127.0.0.1
>>> DEBUG 09:34:48,133 Processing response on a callback from
>>> 1810@localhost/127.0.0.1
>>> DEBUG 09:34:48,133 range slices read object2
>>> DEBUG 09:34:48,133 range slices read object1
>>> DEBUG 09:34:48,133 range slices read object3
>>>
>>>
>>> In [39]: cass_test.read_range(conn, "object3", "")
>>> Out[39]:
>>>
>>>
[KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595272693,
>>> name='col_name', value='col_value'), super_column=None)],
>>> key='object3')]
>>>
>>> DEBUG 09:35:26,090 range_slice
>>> DEBUG 09:35:26,090 RangeSliceCommand{keyspace='Keyspace1',
>>> column_family='Standard1', super_column=null,
>>> predicate=SlicePredicate(column_names:[[B@24e33e18]),
>>> range=[123092639156685888118746480803115294277,0], max_keys=1000}
>>> DEBUG 09:35:26,090 Adding to restricted ranges
>>> [123092639156685888118746480803115294277,0] for
>>>
>>>
(75349581786326521367945210761838448174,75349581786326521367945210761838448174]
>>> DEBUG 09:35:26,090 reading RangeSliceCommand{keyspace='Keyspace1',
>>> column_family='Standard1', super_column=null,
>>> predicate=SlicePredicate(column_names:[[B@24e33e18]),
>>> range=[123092639156685888118746480803115294277,0], max_keys=1000} from
>>> 1847@localhost/127.0.0.1
>>> DEBUG 09:35:26,090 Sending RangeSliceReply{rows=Row(key='object3',
>>> cf=ColumnFamily(Standard1
>>> [636f6c5f6e616d65:false:9@1272315595272693,]))}
>>> to 1847@localhost/127.0.0.1
>>> DEBUG 09:35:26,090 Processing response on a callback from
>>> 1847@localhost/127.0.0.1
>>> DEBUG 09:35:26,090 range slices read object3
>>>
>>>
>>>
>>> thanks
>>> Aaron
>>>
>>>
>>>
>>>
>>> On Sun, 25 Apr 2010 20:23:05 -0700, aaron <aa...@the-mortons.org>
wrote:
>>>>
>>>> I've been looking at the get_range_slices feature and have found some
>>>> odd
>>>> behaviour I do not understand. Basically the keys returned in a range
>>>
>>> query
>>>>
>>>> do not match what I would expect to see. I think it may have something
>>>> to
>>>> do with the ordering of keys that I don't know about, but I'm just
>>>> guessing.
>>>>
>>>> On Cassandra v 0.6.1, single node local install; RandomPartitioner.
>>>> Using
>>>> Python and my own thin wrapper around the Thrift Python API.
>>>>
>>>> Step 1.
>>>>
>>>> Insert 3 keys into the "Standard 1" column family, called "object 1"
>>>> "object 2" and "object 3", each with a single column called 'name'
>>>> with a value like 'object1'
>>>>
>>>> Step 2.
>>>>
>>>> Do a get_range_slices call in the "Standard 1" CF, for column names
>>>> ["name"] with start_key "object1" and end_key "object3". I expect to
>>>> see
>>>> three results, but I only see results for object1 and object2. Below
>>>> are
>>>> the thrift types I'm passing into the Cassandra.Client object...
>>>>
>>>> - ColumnParent(column_family='Standard1', super_column=None)
>>>> - SlicePredicate(column_names=['name'], slice_range=None)
>>>> - KeyRange(end_key='object3', start_key='object1', count=4000,
>>>> end_token=None, start_token=None)
>>>>
>>>> and the output
>>>>
>>>>
>>>
>>>
[KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250258810439,
>>>>
>>>> name='name', value='object1'), super_column=None)], key='object1'),
>>>>
>>>
>>>
KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250271620362,
>>>>
>>>> name='name', value='object3'), super_column=None)], key='object3')]
>>>>
>>>> Step 3.
>>>>
>>>> Modify the get_range_slices call, so the start_key is object2. In this
>>>
>>> case
>>>>
>>>> I expect to see 2 rows returned, but I get 3. Thrift args and return
>>>> are
>>>> below...
>>>>
>>>> - ColumnParent(column_family='Standard1', super_column=None)
>>>> - SlicePredicate(column_names=['name'], slice_range=None)
>>>> - KeyRange(end_key='object3', start_key='object2', count=4000,
>>>> end_token=None, start_token=None)
>>>>
>>>> and the output
>>>>
>>>>
>>>
>>>
[KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250265190715,
>>>>
>>>> name='name', value='object2'), super_column=None)], key='object2'),
>>>>
>>>
>>>
KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250258810439,
>>>>
>>>> name='name', value='object1'), super_column=None)], key='object1'),
>>>>
>>>
>>>
KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250271620362,
>>>>
>>>> name='name', value='object3'), super_column=None)], key='object3')]
>>>>
>>>>
>>>>
>>>> Can anyone explain these odd results? As I said I've got my own python
>>>> wrapper around the client, so I may be doing something wrong. But I've
>>>> pulled out the thrift objects and they go in and out of the thrift
>>>> Cassandra.Client, so I think I'm ok. (I have not noticed a systematic
>>>> problem with my wrapper).
>>>>
>>>> On a more general note, is there information on the sort order of keys
>>>
>>> when
>>>>
>>>> using key ranges? I'm guessing the hash of the keys is compared, and I'm
>>>> wondering whether the hashes of the keys maintain the order of the
>>>> original values? Also I assume the order is byte order, rather than
>>>> ASCII or utf8.
>>>
>>>>
>>>> I was experimenting with the difference between column slicing and key
>>>> slicing. In my case I could write the keys in as column names (they are
>>>> in buckets) as well, slice there first, then use the results to make a
>>>> multi-key get. I'm trying to support features like: get me all the data
>>>> where the key starts with "foo.bar".
>>>>
>>>> Thanks for the fun project.
>>>>
>>>> Aaron
>>
>>

Re: strange get_range_slices behaviour v0.6.1

Posted by Jonathan Ellis <jb...@gmail.com>.
Util.range returns a Range object which is end-exclusive.  (You want
"Bounds" for end-inclusive.)

On Sun, May 2, 2010 at 7:19 AM, aaron morton <aa...@thelastpickle.com> wrote:
> Hi there, I'm still getting odd behavior with get_range_slices. I've created
> a JUnit test that illustrates the case.
> Could someone take a look and either let me know where my understanding is
> wrong, or whether this is a real issue?
>
>
> I added the following to ColumnFamilyStoreTest.java
>
>
>    private ColumnFamilyStore insertKey1Key2Key3() throws IOException,
> ExecutionException, InterruptedException
>    {
>        List<RowMutation> rms = new LinkedList<RowMutation>();
>        RowMutation rm;
>        rm = new RowMutation("Keyspace2", "key1".getBytes());
>        rm.add(new QueryPath("Standard1", null, "Column1".getBytes()),
> "asdf".getBytes(), 0);
>        rms.add(rm);
>
>        rm = new RowMutation("Keyspace2", "key2".getBytes());
>        rm.add(new QueryPath("Standard1", null, "Column1".getBytes()),
> "asdf".getBytes(), 0);
>        rms.add(rm);
>
>        rm = new RowMutation("Keyspace2", "key3".getBytes());
>        rm.add(new QueryPath("Standard1", null, "Column1".getBytes()),
> "asdf".getBytes(), 0);
>        rms.add(rm);
>        return Util.writeColumnFamily(rms);
>    }
>
>
>    @Test
>    public void testThreeKeyRangeAll() throws IOException,
> ExecutionException, InterruptedException
>    {
>        ColumnFamilyStore cfs = insertKey1Key2Key3();
>
>        IPartitioner p = StorageService.getPartitioner();
>        RangeSliceReply result =
> cfs.getRangeSlice(ArrayUtils.EMPTY_BYTE_ARRAY,
>                                                   Util.range(p, "key1",
> "key3"),
>                                                   10,
>                                                   null,
>
> Arrays.asList("Column1".getBytes()));
>        assertEquals(3, result.rows.size());
>    }
>
>    @Test
>    public void testThreeKeyRangeSkip1() throws IOException,
> ExecutionException, InterruptedException
>    {
>        ColumnFamilyStore cfs = insertKey1Key2Key3();
>
>        IPartitioner p = StorageService.getPartitioner();
>        RangeSliceReply result =
> cfs.getRangeSlice(ArrayUtils.EMPTY_BYTE_ARRAY,
>                                                   Util.range(p, "key2",
> "key3"),
>                                                   10,
>                                                   null,
>
> Arrays.asList("Column1".getBytes()));
>        assertEquals(2, result.rows.size());
>    }
>
> Running this with "ant test" the partial output is....
>
>    [junit] Testsuite: org.apache.cassandra.db.ColumnFamilyStoreTest
>    [junit] Tests run: 7, Failures: 2, Errors: 0, Time elapsed: 1.405 sec
>    [junit]
>    [junit] Testcase:
> testThreeKeyRangeAll(org.apache.cassandra.db.ColumnFamilyStoreTest):
>  FAILED
>    [junit] expected:<3> but was:<2>
>    [junit] junit.framework.AssertionFailedError: expected:<3> but was:<2>
>    [junit]     at
> org.apache.cassandra.db.ColumnFamilyStoreTest.testThreeKeyRangeAll(ColumnFamilyStoreTest.java:170)
>    [junit]
>    [junit]
>    [junit] Testcase:
> testThreeKeyRangeSkip1(org.apache.cassandra.db.ColumnFamilyStoreTest):
>  FAILED
>    [junit] expected:<2> but was:<1>
>    [junit] junit.framework.AssertionFailedError: expected:<2> but was:<1>
>    [junit]     at
> org.apache.cassandra.db.ColumnFamilyStoreTest.testThreeKeyRangeSkip1(ColumnFamilyStoreTest.java:184)
>    [junit]
>    [junit]
>    [junit] Test org.apache.cassandra.db.ColumnFamilyStoreTest FAILED
>
>
> Any help appreciated.
>
> Aaron
>
>
> On 27 Apr 2010, at 09:38, aaron wrote:
>
>>
>> I've broken this case down further to some Python code that works against
>> the thrift generated
>> client and am still getting the same odd results. With keys object1,
>> object2 and object3 an
>> open ended get_range_slice starting with "object1" only returns object1
>> and
>> 2.
>>
>> I'm guessing that I've got something wrong or my expectation of how
>> get_range_slice works
>> is wrong, but I cannot see where I've gone wrong. Any help would be
>> appreciated.
>>
>> The Python code to add and read keys is below; it assumes a Cassandra.Client
>> connection.
>>
>> import time
>> from cassandra import Cassandra,ttypes
>> from thrift import Thrift
>> from thrift.protocol import TBinaryProtocol
>> from thrift.transport import TSocket, TTransport
>>
>>
>> def add_data(conn):
>>
>>   col_path = ttypes.ColumnPath(column_family="Standard1",
>> column="col_name")
>>   consistency = ttypes.ConsistencyLevel.QUORUM
>>
>>   for key in ["object1", "object2", "object3"]:
>>       conn.insert("Keyspace1", key, col_path, "col_value",
>>           int(time.time() * 1e6), consistency)
>>   return
>>
>> def read_range(conn, start_key, end_key):
>>
>>   col_parent = ttypes.ColumnParent(column_family="Standard1")
>>
>>   predicate = ttypes.SlicePredicate(column_names=["col_name"])
>>   range = ttypes.KeyRange(start_key=start_key, end_key=end_key,
>> count=1000)
>>   consistency = ttypes.ConsistencyLevel.QUORUM
>>
>>   return conn.get_range_slices("Keyspace1", col_parent,
>>               predicate, range, consistency)
>>
>>
>> Below is the result of calling read_range with different start values.
>> I've
>> also included
>> the debug log for each call, the line starting with "reading
>> RangeSliceCommand" seems to
>> show that key hash for "object2" is greater than "object3".
>>
>> #expect to return objects 1,2 and 3
>>
>> In [37]: cass_test.read_range(conn, "object1", "")
>> Out[37]:
>>
>> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595268837,
>> name='col_name', value='col_value'), super_column=None)], key='object1'),
>>
>> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595272693,
>> name='col_name', value='col_value'), super_column=None)], key='object3')]
>>
>> DEBUG 09:29:59,791 range_slice
>> DEBUG 09:29:59,791 RangeSliceCommand{keyspace='Keyspace1',
>> column_family='Standard1', super_column=null,
>> predicate=SlicePredicate(column_names:[[B@257b40fe]),
>> range=[121587881847328893689247922008234581399,0], max_keys=1000}
>> DEBUG 09:29:59,791 Adding to restricted ranges
>> [121587881847328893689247922008234581399,0] for
>>
>> (75349581786326521367945210761838448174,75349581786326521367945210761838448174]
>> DEBUG 09:29:59,791 reading RangeSliceCommand{keyspace='Keyspace1',
>> column_family='Standard1', super_column=null,
>> predicate=SlicePredicate(column_names:[[B@257b40fe]),
>> range=[121587881847328893689247922008234581399,0], max_keys=1000} from
>> 1528@localhost/127.0.0.1
>> DEBUG 09:29:59,791 Sending RangeSliceReply{rows=Row(key='object1',
>> cf=ColumnFamily(Standard1
>> [636f6c5f6e616d65:false:9@1272315595268837,])),Row(key='object3',
>> cf=ColumnFamily(Standard1 [636f6c5f6e616d65:false:9@1272315595272693,]))}
>> to 1528@localhost/127.0.0.1
>> DEBUG 09:29:59,791 Processing response on a callback from
>> 1528@localhost/127.0.0.1
>> DEBUG 09:29:59,791 range slices read object1
>> DEBUG 09:29:59,791 range slices read object3
>>
>>
>> In [38]: cass_test.read_range(conn, "object2", "")
>> Out[38]:
>>
>> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595271798,
>> name='col_name', value='col_value'), super_column=None)], key='object2'),
>>
>> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595268837,
>> name='col_name', value='col_value'), super_column=None)], key='object1'),
>>
>> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595272693,
>> name='col_name', value='col_value'), super_column=None)], key='object3')]
>>
>> DEBUG 09:34:48,133 range_slice
>> DEBUG 09:34:48,133 RangeSliceCommand{keyspace='Keyspace1',
>> column_family='Standard1', super_column=null,
>> predicate=SlicePredicate(column_names:[[B@7966340c]),
>> range=[28312518014678916505369931620527723964,0], max_keys=1000}
>> DEBUG 09:34:48,133 Adding to restricted ranges
>> [28312518014678916505369931620527723964,0] for
>>
>> (75349581786326521367945210761838448174,75349581786326521367945210761838448174]
>> DEBUG 09:34:48,133 reading RangeSliceCommand{keyspace='Keyspace1',
>> column_family='Standard1', super_column=null,
>> predicate=SlicePredicate(column_names:[[B@7966340c]),
>> range=[28312518014678916505369931620527723964,0], max_keys=1000} from
>> 1810@localhost/127.0.0.1
>> DEBUG 09:34:48,133 Sending RangeSliceReply{rows=Row(key='object2',
>> cf=ColumnFamily(Standard1
>> [636f6c5f6e616d65:false:9@1272315595271798,])),Row(key='object1',
>> cf=ColumnFamily(Standard1
>> [636f6c5f6e616d65:false:9@1272315595268837,])),Row(key='object3',
>> cf=ColumnFamily(Standard1 [636f6c5f6e616d65:false:9@1272315595272693,]))}
>> to 1810@localhost/127.0.0.1
>> DEBUG 09:34:48,133 Processing response on a callback from
>> 1810@localhost/127.0.0.1
>> DEBUG 09:34:48,133 range slices read object2
>> DEBUG 09:34:48,133 range slices read object1
>> DEBUG 09:34:48,133 range slices read object3
>>
>>
>> In [39]: cass_test.read_range(conn, "object3", "")
>> Out[39]:
>>
>> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595272693,
>> name='col_name', value='col_value'), super_column=None)], key='object3')]
>>
>> DEBUG 09:35:26,090 range_slice
>> DEBUG 09:35:26,090 RangeSliceCommand{keyspace='Keyspace1',
>> column_family='Standard1', super_column=null,
>> predicate=SlicePredicate(column_names:[[B@24e33e18]),
>> range=[123092639156685888118746480803115294277,0], max_keys=1000}
>> DEBUG 09:35:26,090 Adding to restricted ranges
>> [123092639156685888118746480803115294277,0] for
>>
>> (75349581786326521367945210761838448174,75349581786326521367945210761838448174]
>> DEBUG 09:35:26,090 reading RangeSliceCommand{keyspace='Keyspace1',
>> column_family='Standard1', super_column=null,
>> predicate=SlicePredicate(column_names:[[B@24e33e18]),
>> range=[123092639156685888118746480803115294277,0], max_keys=1000} from
>> 1847@localhost/127.0.0.1
>> DEBUG 09:35:26,090 Sending RangeSliceReply{rows=Row(key='object3',
>> cf=ColumnFamily(Standard1 [636f6c5f6e616d65:false:9@1272315595272693,]))}
>> to 1847@localhost/127.0.0.1
>> DEBUG 09:35:26,090 Processing response on a callback from
>> 1847@localhost/127.0.0.1
>> DEBUG 09:35:26,090 range slices read object3
>>
>>
>>
>> thanks
>> Aaron
>>
>>
>>
>>
>> On Sun, 25 Apr 2010 20:23:05 -0700, aaron <aa...@the-mortons.org> wrote:
>>>
>>> I've been looking at the get_range_slices feature and have found some odd
>>> behaviour I do not understand. Basically the keys returned in a range
>>
>> query
>>>
>>> do not match what I would expect to see. I think it may have something to
>>> do with the ordering of keys that I don't know about, but I'm just
>>> guessing.
>>>
>>> On Cassandra v 0.6.1, single node local install; RandomPartitioner. Using
>>> Python and my own thin wrapper around the Thrift Python API.
>>>
>>> Step 1.
>>>
>>> Insert 3 keys into the "Standard 1" column family, called "object 1"
>>> "object 2" and "object 3", each with a single column called 'name' with a
>>> value like 'object1'
>>>
>>> Step 2.
>>>
>>> Do a get_range_slices call in the "Standard 1" CF, for column names
>>> ["name"] with start_key "object1" and end_key "object3". I expect to see
>>> three results, but I only see results for object1 and object2. Below are
>>> the thrift types I'm passing into the Cassandra.Client object...
>>>
>>> - ColumnParent(column_family='Standard1', super_column=None)
>>> - SlicePredicate(column_names=['name'], slice_range=None)
>>> - KeyRange(end_key='object3', start_key='object1', count=4000,
>>> end_token=None, start_token=None)
>>>
>>> and the output
>>>
>>>
>>
>> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250258810439,
>>>
>>> name='name', value='object1'), super_column=None)], key='object1'),
>>>
>>
>> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250271620362,
>>>
>>> name='name', value='object3'), super_column=None)], key='object3')]
>>>
>>> Step 3.
>>>
>>> Modify the get_range_slices call, so the start_key is object2. In this
>>
>> case
>>>
>>> I expect to see 2 rows returned, but I get 3. Thrift args and return are
>>> below...
>>>
>>> - ColumnParent(column_family='Standard1', super_column=None)
>>> - SlicePredicate(column_names=['name'], slice_range=None)
>>> - KeyRange(end_key='object3', start_key='object2', count=4000,
>>> end_token=None, start_token=None)
>>>
>>> and the output
>>>
>>>
>>
>> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250265190715,
>>>
>>> name='name', value='object2'), super_column=None)], key='object2'),
>>>
>>
>> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250258810439,
>>>
>>> name='name', value='object1'), super_column=None)], key='object1'),
>>>
>>
>> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250271620362,
>>>
>>> name='name', value='object3'), super_column=None)], key='object3')]
>>>
>>>
>>>
>>> Can anyone explain these odd results? As I said I've got my own python
>>> wrapper around the client, so I may be doing something wrong. But I've
>>> pulled out the thrift objects and they go in and out of the thrift
>>> Cassandra.Client, so I think I'm ok. (I have not noticed a systematic
>>> problem with my wrapper).
>>>
>>> On a more general note, is there information on the sort order of keys
>>
>> when
>>>
>>> using key ranges? I'm guessing the hash of the keys is compared, and I'm
>>> wondering whether the hashes of the keys maintain the order of the original
>>> values? Also I assume the order is byte order, rather than ASCII or utf8.
>>
>>>
>>> I was experimenting with the difference between column slicing and key
>>> slicing. In my case I could write the keys in as column names (they are in
>>> buckets) as well and slice there first, then use the results to make a
>>> multi-key get. I'm trying to support features like: get me all the data
>>> where the key starts with "foo.bar".
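The bucket approach works because columns within a row are kept sorted by name, so a prefix query becomes a contiguous column slice. A stand-alone sketch of the idea, with a plain sorted list standing in for a bucket row's column names (not the Thrift API):

```python
import bisect

def prefix_slice(sorted_names, prefix):
    # A prefix query over sorted names is a contiguous slice from
    # `prefix` up to (but excluding) the first name that cannot share
    # the prefix; "\xff" serves as an upper sentinel for keys like these.
    lo = bisect.bisect_left(sorted_names, prefix)
    hi = bisect.bisect_left(sorted_names, prefix + "\xff")
    return sorted_names[lo:hi]

names = sorted(["foo.bar.a", "foo.bar.b", "foo.baz", "other"])
print(prefix_slice(names, "foo.bar"))  # only the "foo.bar" keys
```

The second lookup is what a column slice with a finish bound does for you server-side: everything between the two bounds is one contiguous read.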
>>>
>>> Thanks for the fun project.
>>>
>>> Aaron
>
>



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com

Re: strange get_range_slices behaviour v0.6.1

Posted by aaron morton <aa...@thelastpickle.com>.
Hi there, I'm still getting odd behavior with get_range_slices. I've
created a JUnit test that illustrates the case.
Could someone take a look and either let me know where my
understanding is wrong, or whether this is a real issue?


I added the following to ColumnFamilyStoreTest.java


     private ColumnFamilyStore insertKey1Key2Key3() throws IOException, ExecutionException, InterruptedException
     {
         List<RowMutation> rms = new LinkedList<RowMutation>();
         RowMutation rm;
         rm = new RowMutation("Keyspace2", "key1".getBytes());
         rm.add(new QueryPath("Standard1", null, "Column1".getBytes()), "asdf".getBytes(), 0);
         rms.add(rm);

         rm = new RowMutation("Keyspace2", "key2".getBytes());
         rm.add(new QueryPath("Standard1", null, "Column1".getBytes()), "asdf".getBytes(), 0);
         rms.add(rm);

         rm = new RowMutation("Keyspace2", "key3".getBytes());
         rm.add(new QueryPath("Standard1", null, "Column1".getBytes()), "asdf".getBytes(), 0);
         rms.add(rm);
         return Util.writeColumnFamily(rms);
     }

     @Test
     public void testThreeKeyRangeAll() throws IOException, ExecutionException, InterruptedException
     {
         ColumnFamilyStore cfs = insertKey1Key2Key3();

         IPartitioner p = StorageService.getPartitioner();
         // Range over [key1, key3]: all three rows should come back.
         RangeSliceReply result = cfs.getRangeSlice(ArrayUtils.EMPTY_BYTE_ARRAY,
                                                    Util.range(p, "key1", "key3"),
                                                    10,
                                                    null,
                                                    Arrays.asList("Column1".getBytes()));
         assertEquals(3, result.rows.size());
     }

     @Test
     public void testThreeKeyRangeSkip1() throws IOException, ExecutionException, InterruptedException
     {
         ColumnFamilyStore cfs = insertKey1Key2Key3();

         IPartitioner p = StorageService.getPartitioner();
         // Range over [key2, key3]: two rows expected.
         RangeSliceReply result = cfs.getRangeSlice(ArrayUtils.EMPTY_BYTE_ARRAY,
                                                    Util.range(p, "key2", "key3"),
                                                    10,
                                                    null,
                                                    Arrays.asList("Column1".getBytes()));
         assertEquals(2, result.rows.size());
     }

Running this with "ant test" the partial output is....

     [junit] Testsuite: org.apache.cassandra.db.ColumnFamilyStoreTest
     [junit] Tests run: 7, Failures: 2, Errors: 0, Time elapsed: 1.405 sec
     [junit]
     [junit] Testcase: testThreeKeyRangeAll(org.apache.cassandra.db.ColumnFamilyStoreTest):	FAILED
     [junit] expected:<3> but was:<2>
     [junit] junit.framework.AssertionFailedError: expected:<3> but was:<2>
     [junit] 	at org.apache.cassandra.db.ColumnFamilyStoreTest.testThreeKeyRangeAll(ColumnFamilyStoreTest.java:170)
     [junit]
     [junit]
     [junit] Testcase: testThreeKeyRangeSkip1(org.apache.cassandra.db.ColumnFamilyStoreTest):	FAILED
     [junit] expected:<2> but was:<1>
     [junit] junit.framework.AssertionFailedError: expected:<2> but was:<1>
     [junit] 	at org.apache.cassandra.db.ColumnFamilyStoreTest.testThreeKeyRangeSkip1(ColumnFamilyStoreTest.java:184)
     [junit]
     [junit]
     [junit] Test org.apache.cassandra.db.ColumnFamilyStoreTest FAILED


Any help appreciated.

Aaron


On 27 Apr 2010, at 09:38, aaron wrote:

>
> I've broken this case down further to some Python code that works against
> the Thrift-generated client and am still getting the same odd results.
> With keys object1, object2 and object3, an open-ended get_range_slice
> starting with "object1" only returns objects 1 and 3.
>
> I'm guessing that I've got something wrong or my expectation of how
> get_range_slice works
> is wrong, but I cannot see where I've gone wrong. Any help would be
> appreciated.
>
> The Python code to add and read keys is below; it assumes a Cassandra.Client
> connection.
>
> import time
> from cassandra import Cassandra,ttypes
> from thrift import Thrift
> from thrift.protocol import TBinaryProtocol
> from thrift.transport import TSocket, TTransport
>
>
> def add_data(conn):
>
>    col_path = ttypes.ColumnPath(column_family="Standard1",
> column="col_name")
>    consistency = ttypes.ConsistencyLevel.QUORUM
>
>    for key in ["object1", "object2", "object3"]:
>        conn.insert("Keyspace1", key, col_path, "col_value",
>            int(time.time() * 1e6), consistency)
>    return
>
> def read_range(conn, start_key, end_key):
>
>    col_parent = ttypes.ColumnParent(column_family="Standard1")
>
>    predicate = ttypes.SlicePredicate(column_names=["col_name"])
>    range = ttypes.KeyRange(start_key=start_key, end_key=end_key,
> count=1000)
>    consistency = ttypes.ConsistencyLevel.QUORUM
>
>    return conn.get_range_slices("Keyspace1", col_parent,
>                predicate, range, consistency)
>
>
> Below is the result of calling read_range with different start values. I've
> also included the debug log for each call; the line starting with "reading
> RangeSliceCommand" seems to show that the key hash for "object2" is less
> than the hashes for "object1" and "object3".
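The hash ordering behind those range bounds can be reproduced outside Cassandra. A minimal sketch, assuming (from reading the 0.6 partitioner code) that RandomPartitioner's token is the absolute value of the key's MD5 digest read as a signed big-endian integer; under that assumption, rows come back in token order rather than key order:

```python
import hashlib

def token(key):
    # Assumed RandomPartitioner token: MD5 of the raw key bytes,
    # interpreted as a signed big-endian integer, absolute value taken.
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return abs(int.from_bytes(digest, "big", signed=True))

keys = ["object1", "object2", "object3"]
print(sorted(keys))             # lexical order of the keys
print(sorted(keys, key=token))  # the order the partitioner stores them in
```

If the assumption holds, token("object2") < token("object1") < token("object3"), which matches the range values in the debug logs below and explains why a range starting at "object2" also returns "object1".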
>
> #expect to return objects 1,2 and 3
>
> In [37]: cass_test.read_range(conn, "object1", "")
> Out[37]:
> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595268837,
> name='col_name', value='col_value'), super_column=None)], key='object1'),
> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595272693,
> name='col_name', value='col_value'), super_column=None)], key='object3')]
>
> DEBUG 09:29:59,791 range_slice
> DEBUG 09:29:59,791 RangeSliceCommand{keyspace='Keyspace1',
> column_family='Standard1', super_column=null,
> predicate=SlicePredicate(column_names:[[B@257b40fe]),
> range=[121587881847328893689247922008234581399,0], max_keys=1000}
> DEBUG 09:29:59,791 Adding to restricted ranges
> [121587881847328893689247922008234581399,0] for
> (75349581786326521367945210761838448174,75349581786326521367945210761838448174 
> ]
> DEBUG 09:29:59,791 reading RangeSliceCommand{keyspace='Keyspace1',
> column_family='Standard1', super_column=null,
> predicate=SlicePredicate(column_names:[[B@257b40fe]),
> range=[121587881847328893689247922008234581399,0], max_keys=1000} from
> 1528@localhost/127.0.0.1
> DEBUG 09:29:59,791 Sending RangeSliceReply{rows=Row(key='object1',
> cf=ColumnFamily(Standard1
> [636f6c5f6e616d65:false:9@1272315595268837,])),Row(key='object3',
> cf=ColumnFamily(Standard1 [636f6c5f6e616d65:false: 
> 9@1272315595272693,]))}
> to 1528@localhost/127.0.0.1
> DEBUG 09:29:59,791 Processing response on a callback from
> 1528@localhost/127.0.0.1
> DEBUG 09:29:59,791 range slices read object1
> DEBUG 09:29:59,791 range slices read object3
>
>
> In [38]: cass_test.read_range(conn, "object2", "")
> Out[38]:
> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595271798,
> name='col_name', value='col_value'), super_column=None)], key='object2'),
> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595268837,
> name='col_name', value='col_value'), super_column=None)], key='object1'),
> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595272693,
> name='col_name', value='col_value'), super_column=None)], key='object3')]
>
> DEBUG 09:34:48,133 range_slice
> DEBUG 09:34:48,133 RangeSliceCommand{keyspace='Keyspace1',
> column_family='Standard1', super_column=null,
> predicate=SlicePredicate(column_names:[[B@7966340c]),
> range=[28312518014678916505369931620527723964,0], max_keys=1000}
> DEBUG 09:34:48,133 Adding to restricted ranges
> [28312518014678916505369931620527723964,0] for
> (75349581786326521367945210761838448174,75349581786326521367945210761838448174 
> ]
> DEBUG 09:34:48,133 reading RangeSliceCommand{keyspace='Keyspace1',
> column_family='Standard1', super_column=null,
> predicate=SlicePredicate(column_names:[[B@7966340c]),
> range=[28312518014678916505369931620527723964,0], max_keys=1000} from
> 1810@localhost/127.0.0.1
> DEBUG 09:34:48,133 Sending RangeSliceReply{rows=Row(key='object2',
> cf=ColumnFamily(Standard1
> [636f6c5f6e616d65:false:9@1272315595271798,])),Row(key='object1',
> cf=ColumnFamily(Standard1
> [636f6c5f6e616d65:false:9@1272315595268837,])),Row(key='object3',
> cf=ColumnFamily(Standard1 [636f6c5f6e616d65:false: 
> 9@1272315595272693,]))}
> to 1810@localhost/127.0.0.1
> DEBUG 09:34:48,133 Processing response on a callback from
> 1810@localhost/127.0.0.1
> DEBUG 09:34:48,133 range slices read object2
> DEBUG 09:34:48,133 range slices read object1
> DEBUG 09:34:48,133 range slices read object3
>
>
> In [39]: cass_test.read_range(conn, "object3", "")
> Out[39]:
> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272315595272693,
> name='col_name', value='col_value'), super_column=None)], key='object3')]
>
> DEBUG 09:35:26,090 range_slice
> DEBUG 09:35:26,090 RangeSliceCommand{keyspace='Keyspace1',
> column_family='Standard1', super_column=null,
> predicate=SlicePredicate(column_names:[[B@24e33e18]),
> range=[123092639156685888118746480803115294277,0], max_keys=1000}
> DEBUG 09:35:26,090 Adding to restricted ranges
> [123092639156685888118746480803115294277,0] for
> (75349581786326521367945210761838448174,75349581786326521367945210761838448174 
> ]
> DEBUG 09:35:26,090 reading RangeSliceCommand{keyspace='Keyspace1',
> column_family='Standard1', super_column=null,
> predicate=SlicePredicate(column_names:[[B@24e33e18]),
> range=[123092639156685888118746480803115294277,0], max_keys=1000} from
> 1847@localhost/127.0.0.1
> DEBUG 09:35:26,090 Sending RangeSliceReply{rows=Row(key='object3',
> cf=ColumnFamily(Standard1 [636f6c5f6e616d65:false: 
> 9@1272315595272693,]))}
> to 1847@localhost/127.0.0.1
> DEBUG 09:35:26,090 Processing response on a callback from
> 1847@localhost/127.0.0.1
> DEBUG 09:35:26,090 range slices read object3
>
>
>
> thanks
> Aaron
>
>
>
>
> On Sun, 25 Apr 2010 20:23:05 -0700, aaron <aa...@the-mortons.org>  
> wrote:
>> I've been looking at the get_range_slices feature and have found some odd
>> behaviour I do not understand. Basically the keys returned in a range query
>> do not match what I would expect to see. I think it may have something to
>> do with the ordering of keys that I don't know about, but I'm just
>> guessing.
>>
>> On Cassandra v 0.6.1, single node local install; RandomPartitioner.  
>> Using
>> Python and my own thin wrapper around the Thrift Python API.
>>
>> Step 1.
>>
>> Insert 3 keys into the "Standard1" column family, called "object1",
>> "object2" and "object3", each with a single column called 'name' with a
>> value like 'object1'.
>>
>> Step 2.
>>
>> Do a get_range_slices call in the "Standard1" CF, for column names
>> ["name"] with start_key "object1" and end_key "object3". I expect to see
>> three results, but I only see results for object1 and object3. Below are
>> the Thrift types I'm passing into the Cassandra.Client object...
>>
>> - ColumnParent(column_family='Standard1', super_column=None)
>> - SlicePredicate(column_names=['name'], slice_range=None)
>> - KeyRange(end_key='object3', start_key='object1', count=4000,
>> end_token=None, start_token=None)
>>
>> and the output
>>
>>
>> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250258810439,
>> name='name', value='object1'), super_column=None)], key='object1'),
>> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250271620362,
>> name='name', value='object3'), super_column=None)], key='object3')]
>>
>> Step 3.
>>
>> Modify the get_range_slices call, so the start_key is object2. In this
>> case I expect to see 2 rows returned, but I get 3. Thrift args and return
>> are below...
>>
>> - ColumnParent(column_family='Standard1', super_column=None)
>> - SlicePredicate(column_names=['name'], slice_range=None)
>> - KeyRange(end_key='object3', start_key='object2', count=4000,
>> end_token=None, start_token=None)
>>
>> and the output
>>
>>
>> [KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250265190715,
>> name='name', value='object2'), super_column=None)], key='object2'),
>> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250258810439,
>> name='name', value='object1'), super_column=None)], key='object1'),
>> KeySlice(columns=[ColumnOrSuperColumn(column=Column(timestamp=1272250271620362,
>> name='name', value='object3'), super_column=None)], key='object3')]
>>
>>
>>
>> Can anyone explain these odd results? As I said, I've got my own Python
>> wrapper around the client, so I may be doing something wrong. But I've
>> pulled out the Thrift objects and they go in and out of the Thrift
>> Cassandra.Client, so I think I'm OK. (I have not noticed a systematic
>> problem with my wrapper.)
>>
>> On a more general note, is there information on the sort order of keys
>> when using key ranges? I'm guessing the hashes of the keys are compared,
>> and I am wondering whether the hashes of the keys maintain the order of the
>> original values? Also I assume the order is byte order, rather than ASCII or UTF-8.
>
>>
>> I was experimenting with the difference between column slicing and key
>> slicing. In my case I could write the keys in as column names (they are in
>> buckets) as well and slice there first, then use the results to make a
>> multi-key get. I'm trying to support features like: get me all the data
>> where the key starts with "foo.bar".
>>
>> Thanks for the fun project.
>>
>> Aaron


Re: Splitting queries, or using two different parsers

Posted by Erick Erickson <er...@gmail.com>.
Been there, done that <G>. If you only knew the number of times someone on
this list has come to my rescue by saying "Did you look at *****"?

But I thought I'd add that you can use the PerFieldAnalyzerWrapper at index
time too, which may help you keep things consistent between indexing time
and searching time....
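Conceptually the wrapper just dispatches on field name, and the same choice has to be applied at index time and at query time. A rough stand-alone sketch of that dispatch (Python for brevity; the two analyzers are crude stand-ins for StandardAnalyzer and KeywordAnalyzer, not Lucene code):

```python
import re

def standard_analyzer(text):
    # Roughly what a lowercasing analyzer does to the "content" field:
    # lowercase and split on non-word characters (stop-word removal
    # omitted for brevity).
    return [t for t in re.split(r"\W+", text.lower()) if t]

def keyword_analyzer(text):
    # Keep the whole value as one case-sensitive token, which is what
    # the case-sensitive "source" field needs.
    return [text]

# Per-field dispatch: a field-specific analyzer if registered, the
# default otherwise -- the essence of PerFieldAnalyzerWrapper.
ANALYZERS = {"source": keyword_analyzer}

def analyze(field, text, default=standard_analyzer):
    return ANALYZERS.get(field, default)(text)

print(analyze("content", "My Search ExPresssiON goes here"))
print(analyze("source", "Some-Source"))
```

Because both indexing and the query parser go through the same dispatch table, a term like Some-Source is never split or lowercased for the source field, while content terms still match case-insensitively.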

Best
Erick

On 1/30/07, Aleksander M. Stensby <al...@integrasco.no> wrote:
>
> kk, I excuse myself for being so ignorant and not looking through the API
> thoroughly :)
> I found the PerFieldAnalyzerWrapper, which I think will do the trick :)
>
> So erhm.. just ignore this message:)
>
> - Aleksander
>
> On Tue, 30 Jan 2007 09:39:07 +0100, Aleksander M. Stensby
> <al...@integrasco.no> wrote:
>
> > Hey everyone! I have a question/problem I hope some of you guys can help
> > me with.
> >
> > I have this case where I have put myself in a bit of trouble... The
> > thing is I have several fields indexed, one being "source" and one being
> > "content" (which is the default field), among other fields that are not
> > really that important.
> >
> > The thing is, the content field and most of the other fields are parsed
> > and tokenized using a StandardAnalyzer with English stop words. So, this
> > field (and most of the others) are lowercased when indexed, and thus
> > search on these fields should be performed in lower case (or the best
> > thing would of course be to use the same analyzer.)
> >
> > The problem occurs when we examine the source field, because here, the
> > field is case-sensitive, and thus search on this field needs to be kept
> > case-sensitive as well, and it should not be tokenized on characters such
> > as "-.," etc.
> >
> > Now. How do i solve this?
> >
> > I want the user to be able to search using the regular lucene Query
> > Syntax. I.e., an input query like:
> > "my Search ExPresssiON goes here" AND source:(Some-Source)
> > I tried one thing, and that was to split the input query string, parse
> > the occurrence of "source:", and then use two different parsers on the
> > two parts, then combine them into a new query. But the problem is that
> > this involves many different scenarios, and also parsing failures if I
> > split it into:
> > "my Search ExPressiON goes here" AND
> > source:(Some-Source)
> > The first will fail since it is an open boolean query missing the last
> > part.
> > Also, there is the problem with different writing styles, where
> > sometimes the query can be complex like
> > <arg1> AND (source:Some-Source OR source:Some-Source OR ...) AND <arg2>
> > OR <arg3> ...
> > etc..
> >
> > It really gives me a headache. Any ideas how I could solve this
> > problem in the best manner?
> > All answers are highly appreciated!
> >
> > - Aleksander
> >
>
>
>
> --
> Aleksander M. Stensby
> Software Developer
> Integrasco A/S
> aleksander.stensby@integrasco.no
> Tlf.: +47 41 22 82 72
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: java-user-unsubscribe@lucene.apache.org
> For additional commands, e-mail: java-user-help@lucene.apache.org
>
>

Re: accurev provider

Posted by Grant Gardner <gd...@optusnet.com.au>.

I faxed in the CLA today. Also uploaded a new version to the JIRA that
fixes a few bugs under Windows.

Grant.


On Wed, 1 Apr 2009 01:56:19 +1100, Brett Porter <br...@apache.org> wrote:
> Do the authors of the original provider, or other users here, have an
> opinion on this? I like that it is more complete (functionally, and in
> docs and tests), however am not an accurev user myself.
> 
> What effect does the URL syntax change have?
> 
> Also Grant, I haven't seen the CLA yet?
> 
> - Brett
> 
> On 17/03/2009, at 11:58 PM, Grant Gardner wrote:
> 
>>
>>
>> I've created the jira with attached code. Let me know if there's
>> anything
>> else I need to do.
>> http://jira.codehaus.org/browse/SCM-445
>>
>> Will do the CLA when I find myself conveniently located near a fax or
>> scanner.
>>


Re: accurev provider

Posted by Brett Porter <br...@apache.org>.
Do the authors of the original provider, or other users here, have an  
opinion on this? I like that it is more complete (functionally, and in  
docs and tests), however am not an accurev user myself.

What effect does the URL syntax change have?

Also Grant, I haven't seen the CLA yet?

- Brett

On 17/03/2009, at 11:58 PM, Grant Gardner wrote:

>
>
> I've created the jira with attached code. Let me know if there's  
> anything
> else I need to do.
> http://jira.codehaus.org/browse/SCM-445
>
> Will do the CLA when I find myself conveniently located near a fax or
> scanner.
>
> On Mon, 9 Mar 2009 05:02:15 +1100, Brett Porter <br...@apache.org>  
> wrote:
>> 1) Create a JIRA ticket under SCM
>> 2) you might like to submit a CLA - you can do it regardless, and if
>> the work is accepted such a large chunk would probably require it  
>> (see
>> http://www.apache.org/licenses/
>>  and specifically http://www.apache.org/licenses/icla.txt)
>>
>> Thanks!
>>
>> - Brett
>>
>> On 07/03/2009, at 12:50 PM, Grant Gardner wrote:
>>
>>>
>>>
>>> Hi, I've got my AccuRev provider to a point where both the release
>>> plugin
>>> and Continuum are working (for me).
>>>
>>> Includes unit tests and tck tests for all the implemented commands.
>>>
>>> How do I go about contributing the code?.
>>>
>>> Grant.
>>>
>>>
>>
>> --
>> Brett Porter
>> brett@apache.org
>> http://blogs.exist.com/bporter/


Re: accurev provider

Posted by Grant Gardner <gd...@optusnet.com.au>.

I've created the jira with attached code. Let me know if there's anything
else I need to do.
http://jira.codehaus.org/browse/SCM-445

Will do the CLA when I find myself conveniently located near a fax or
scanner.

On Mon, 9 Mar 2009 05:02:15 +1100, Brett Porter <br...@apache.org> wrote:
> 1) Create a JIRA ticket under SCM
> 2) you might like to submit a CLA - you can do it regardless, and if
> the work is accepted such a large chunk would probably require it (see
> http://www.apache.org/licenses/
>   and specifically http://www.apache.org/licenses/icla.txt)
> 
> Thanks!
> 
> - Brett
> 
> On 07/03/2009, at 12:50 PM, Grant Gardner wrote:
> 
>>
>>
>> Hi, I've got my AccuRev provider to a point where both the release
>> plugin
>> and Continuum are working (for me).
>>
>> Includes unit tests and tck tests for all the implemented commands.
>>
>> How do I go about contributing the code?.
>>
>> Grant.
>>
>>
> 
> --
> Brett Porter
> brett@apache.org
> http://blogs.exist.com/bporter/

Re: accurev provider

Posted by Brett Porter <br...@apache.org>.
1) Create a JIRA ticket under SCM
2) you might like to submit a CLA - you can do it regardless, and if  
the work is accepted such a large chunk would probably require it (see http://www.apache.org/licenses/ 
  and specifically http://www.apache.org/licenses/icla.txt)

Thanks!

- Brett

On 07/03/2009, at 12:50 PM, Grant Gardner wrote:

>
>
> Hi, I've got my AccuRev provider to a point where both the release  
> plugin
> and Continuum are working (for me).
>
> Includes unit tests and tck tests for all the implemented commands.
>
> How do I go about contributing the code?.
>
> Grant.
>
>

--
Brett Porter
brett@apache.org
http://blogs.exist.com/bporter/


Re: Problem to parse a PDF document

Posted by Dave Smith <da...@candata.com>.
Fine, but because of PDFBOX-1067 it will not render ...

Dave Smith
Candata Ltd.
416-493-9020x2413
Direct: 416-855-2413



On Wed, Jun 13, 2012 at 10:27 AM, Timo Boehme <ti...@ontochem.com>wrote:

> Hi,
>
> Am 13.06.2012 14:29, schrieb Dave Smith:
>
>> Bug
>> https://issues.apache.org/jira/browse/PDFBOX-1067
>>
>
> as I see it this bug has nothing to do with PDFBOX-1067 but relates to
> PDFBOX-1099. The PDF in question was changed, so we have 2 XREF tables and
> 2 object streams. The pages object (objnr 2) is in both streams (first with
> 1 page, second with 2 pages); the first stream is parsed first, the second
> after it, and already-existing objects are skipped, which is wrong in this
> case. For correct handling, the XREF information must be used.
>
> However there is a workaround: use NonSequentialPDFParser. Load your
> document with PDDocument.loadNonSeq() and you are fine.
>
>
> Best regards,
> Timo
>
>  On Wed, Jun 13, 2012 at 8:02 AM,<pi...@huttin.com>  wrote:
>>
>>  Sorry,
>>>
>>> apparently the pdf was not correctly attached to the previous mail, I
>>> just zip it and re-attach it.
>>>
>>> Pierre Huttin
>>>
>>> On Wed, 13 Jun 2012 13:56:50 +0200,<pi...@huttin.com>  wrote:
>>>
>>>> Hello,
>>>>
>>>> I have some trouble with documents: the library is not able to
>>>> retrieve the number of pages and load them into the list using the
>>>> PDDocument.getDocumentCatalog().getAllPages() method.
>>>>
>>>> The PDF file and the Java code to retrieve the number of pages are
>>>> attached to this mail. Apparently it looks like the PDFParser does not
>>>> read the /Pages object correctly; the refs of the pages are "8 0" and
>>>> "19 0".
>>>>
>>>> I open the document correctly with Adobe Reader and iText RUPS; both
>>>> retrieve the correct number of pages: 2.
>>>>
>>>> I try to run my code using version 1.7.0 of PDFBox.
>>>>
>>>> Thanks in advance for your help.
>>>>
>>>> Best regards
>>>>
>>>> Pierre Huttin
>>>>
>>>
>>>
>>
>
> --
>
>  Timo Boehme
>  OntoChem GmbH
>  H.-Damerow-Str. 4
>  06120 Halle/Saale
>  T: +49 345 4780474
>  F: +49 345 4780471
>  timo.boehme@ontochem.com
>
> _____________________________________________________________________
>
>  OntoChem GmbH
>  Geschäftsführer: Dr. Lutz Weber
>  Sitz: Halle / Saale
>  Registergericht: Stendal
>  Registernummer: HRB 215461
> _____________________________________________________________________
>
>

Re: Problem to parse a PDF document

Posted by Timo Boehme <ti...@ontochem.com>.
Hi,

Am 13.06.2012 14:29, schrieb Dave Smith:
> Bug
> https://issues.apache.org/jira/browse/PDFBOX-1067

as I see it this bug has nothing to do with PDFBOX-1067 but relates to
PDFBOX-1099. The PDF in question was changed, so we have 2 XREF tables
and 2 object streams. The pages object (objnr 2) is in both streams
(first with 1 page, second with 2 pages); the first stream is parsed
first, the second after it, and already-existing objects are skipped,
which is wrong in this case. For correct handling, the XREF information
must be used.

However there is a workaround: use NonSequentialPDFParser. Load your 
document with PDDocument.loadNonSeq() and you are fine.


Best regards,
Timo

> On Wed, Jun 13, 2012 at 8:02 AM,<pi...@huttin.com>  wrote:
>
>> Sorry,
>>
>> apparently the pdf was not correctly attached to the previous mail, I
>> just zip it and re-attach it.
>>
>> Pierre Huttin
>>
>> On Wed, 13 Jun 2012 13:56:50 +0200,<pi...@huttin.com>  wrote:
>>> Hello,
>>>
>>> I have some trouble with documents: the library is not able to
>>> retrieve the number of pages and load them into the list using the
>>> PDDocument.getDocumentCatalog().getAllPages() method.
>>>
>>> The PDF file and the Java code to retrieve the number of pages are
>>> attached to this mail. Apparently it looks like the PDFParser does not
>>> read the /Pages object correctly; the refs of the pages are "8 0" and
>>> "19 0".
>>>
>>> I open the document correctly with Adobe Reader and iText RUPS; both
>>> retrieve the correct number of pages: 2.
>>>
>>> I try to run my code using version 1.7.0 of PDFBox.
>>>
>>> Thanks in advance for your help.
>>>
>>> Best regards
>>>
>>> Pierre Huttin
>>
>


-- 

  Timo Boehme
  OntoChem GmbH
  H.-Damerow-Str. 4
  06120 Halle/Saale
  T: +49 345 4780474
  F: +49 345 4780471
  timo.boehme@ontochem.com

_____________________________________________________________________

  OntoChem GmbH
  Geschäftsführer: Dr. Lutz Weber
  Sitz: Halle / Saale
  Registergericht: Stendal
  Registernummer: HRB 215461
_____________________________________________________________________


Re: Problem to parse a PDF document

Posted by Dave Smith <da...@candata.com>.
Bug

https://issues.apache.org/jira/browse/PDFBOX-1067

Dave Smith
Candata Ltd.
416-493-9020x2413
Direct: 416-855-2413



On Wed, Jun 13, 2012 at 8:02 AM, <pi...@huttin.com> wrote:

> Sorry,
>
> apparently the pdf was not correctly attached to the previous mail, I
> just zip it and re-attach it.
>
> Pierre Huttin
>
> On Wed, 13 Jun 2012 13:56:50 +0200, <pi...@huttin.com> wrote:
> > Hello,
> >
> > I have some trouble with documents: the library is not able to
> > retrieve the number of pages and load them into the list using the
> > PDDocument.getDocumentCatalog().getAllPages() method.
> >
> > The PDF file and the Java code to retrieve the number of pages are
> > attached to this mail. Apparently it looks like the PDFParser does not
> > read the /Pages object correctly; the refs of the pages are "8 0" and
> > "19 0".
> >
> > I open the document correctly with Adobe Reader and iText RUPS; both
> > retrieve the correct number of pages: 2.
> >
> > I try to run my code using version 1.7.0 of PDFBox.
> >
> > Thanks in advance for your help.
> >
> > Best regards
> >
> > Pierre Huttin
>

Re: Once again, Repetitive loss of connection with JDBC request and MySQL

Posted by serge van Thiel <se...@skynet.be>.
Well as I said, I first wanted to have a robust scenario to start from and to 
further elaborate it. 
So, in order to give you feedback, I made 2 tests : the first with a pool of 1 
connection and a reuse factor of 1. The second is with a pool of 10 and a 
reuse of 1. Those values have been defined in a "Database Connection Pool 
defaults" entry.
I guess it does not represent something "real" in terms of database access but 
it does not crash anymore, which is the primary condition to enable further 
investigation.
Cheers,

Serge
On Sunday 10 August 2003 04:54 pm, mstover1@apache.org wrote:
> That's interesting - I'm curious, what numbers did you choose for number of
> connections in the pool and number of re-uses allowed?
>
> -Mike
>
> On 10 Aug 2003 at 16:45, serge van Thiel wrote:
> > Hi Mike and Jeremy,
> >
> > Thank you very much for your replies and sorry for being late in my
> > reaction. Here are the results of my tests :
> > Rel1.9 standard keeps crashing for the same reason on my very simple one
> > thread SELECT.
> > Rel1.9 with the 2 new jars from Jeremy does not crash at all and is, in
> > that oversimplified test, even 50 times faster than any previous result I
> > have been recording.
> > The logfile (jmeter.log) does not show any abnormal behaviour as far as I
> > can see. It even includes many more details.
> > Given the few and non-exhaustive tests I have made, it seems that you
> > did a great job.
> >
> > Thanks again,
> >
> > Serge van Thiel
> > Ph.:+32 2 3752277
> > Mob.:+32 477 414543
> > email: serge.vanthiel@skynet.be
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
> > For additional commands, e-mail: jmeter-user-help@jakarta.apache.org

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-user-help@jakarta.apache.org


Re: Re: Cannot load JDBC driver class

Posted by Rick Fincher <rn...@tbird.com>.
Hi Shyly,

It looks like you have everything right.

You are not missing an environment variable, assuming you meant
%CATALINA_HOME% and not %TOMCAT% or %CATALINA_HOM% below.

Do you have the context entry in server.xml inside <host>?

Also do you have the <resource-ref> in the right place in the web.xml file?
Those entries have to be in the right order.

It has to be after </error-page> and before <security-constraint>.
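For reference, a sketch of the two pieces being discussed, in Tomcat 4.1-style configuration. The resource name jdbc/mydb and the Sybase driver class come from this thread; the context path, docBase, and database host/port are placeholders:

```xml
<!-- server.xml: inside the <Host> element, a context with a DataSource -->
<Context path="/myapp" docBase="myapp">
  <Resource name="jdbc/mydb" auth="Container" type="javax.sql.DataSource"/>
  <ResourceParams name="jdbc/mydb">
    <parameter>
      <name>driverClassName</name>
      <value>com.sybase.jdbc2.jdbc.SybDriver</value>
    </parameter>
    <parameter>
      <name>url</name>
      <!-- placeholder host/port/database -->
      <value>jdbc:sybase:Tds:dbhost:5000/mydb</value>
    </parameter>
  </ResourceParams>
</Context>

<!-- web.xml: <resource-ref> must appear after </error-page> and before
     <security-constraint> to satisfy the DTD's element order -->
<resource-ref>
  <description>DB Connection</description>
  <res-ref-name>jdbc/mydb</res-ref-name>
  <res-type>javax.sql.DataSource</res-type>
  <res-auth>Container</res-auth>
</resource-ref>
```

The "Cannot load JDBC driver class 'null'" error typically means the driverClassName parameter was never found, which is why the placement of these blocks matters as much as their content.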

Can you post (or send directly) your entire server.xml and web.xml files
after sanitizing them?

Rick

----- Original Message -----

> Thank you for all the responses, but it's still not working.  There is
only one jconn2.jar file, and it contains
com/sybase/jdbc2/jdbc/SybDriver.class

com.sybase.jdbc2.jdbc.SybDriver

>
> Out of curiosity, I tried replacing the driver class with another class I
> know is in %TOMCAT%\common\lib and got the same error message.  Am I missing
> an environment variable?  Running %CATALINA_HOM%\bin\startup produces:
> Using CATALINA_BASE:   c:\local\tomcat\jakarta-tomcat-4.1.24
> Using CATALINA_HOME:   c:\local\tomcat\jakarta-tomcat-4.1.24
> Using CATALINA_TMPDIR: c:\local\tomcat\jakarta-tomcat-4.1.24\temp
> Using JAVA_HOME:       c:\local\jre\
>
> Not quite bald yet,
> Shyly
>
> At 12:04 PM 4/17/2003 -0400, you wrote:
>
> >OK, I think I see the problem now. Try removing jconn2.jar from
> >%CATALINA_HOME%\shared\lib.  That directory is for jars shared by web apps
> >only, not the Tomcat server program itself.
> >
> >It may be loading that first and not reloading it from common, where Tomcat
> >can get at it.
> >
> >Rick
> >----- Original Message -----
> >
> >
> >> Hello.
> >>
> >> I've seen a lot of emails on other forums where people are having the same
> >> problem as what I describe below.  Would anyone know why we keep getting a
> >> "Cannot load JDBC driver class 'null'" message?  We have Tomcat running on
> >> Unix and NT (installed by different people), and both give the same error.
> >> (Or put another way, is there something special I should do to get Tomcat to
> >> recognize jconn2.jar and other jar files in %CATALINA_HOME%\common\lib?)
> >>
> >> Please help - I'm running out of hairs to pull.
> >>
> >> Thank you,
> >> Shyly
> >>
> >>
> >> >>Date: Wed, 16 Apr 2003 15:00:13 -0400
> >> >>To: "Tomcat Users List" <to...@jakarta.apache.org>
> >> >>From: Shyly Amarasinghe <am...@dpw.com>
> >> >>Subject: Re: Cannot load JDBC driver class
> >> >>
> >> >>Wow, Thanks for the quick reply!  I've got the debugging all the way
up,
> >and when I click on datasources under the webapp's name in the tomcat
> >administrator, I get the following:
> >> >>org.apache.jasper.JasperException: Exception retrieving attribute
> >'driverClassName'
> >> >>
> >> >>I think the url is correct since it works in the second example below
> >(not using the datasource).  For some reason tomcat can't find jconn2.jar
> >via the datasource.  Is there something else I need to configure in
> >server.xml or web.xml other than what's below?
> >> >>
> >> >>Thank you again,
> >> >>Shyly
> >> >>At 02:45 PM 4/16/2003 -0400, you wrote:
> >> >>
> >> >>>Hi,
> >> >>>
> >> >>>What is your database URL?  I got that error when it could not open
my
> >> >>>database.  It has to be of the form "jdbc:(etc.)"  You can set your
> >debug
> >> >>>level higher like in the HOWTO to get a better data dump in the log
> >file to
> >> >>>see what's going on.  Be sure to enable the creation of a
> >> >>>logs/localhost_db_log file.
> >> >>>
> >> >>>Rick
> >> >>>
> >> >>>----- Original Message -----
> >> >>>
> >> >>>> Hello.
> >> >>>>
> >> >>>> I am trying to use datasources with Tomcat but get an error
message
> >> >>>"java.sql.SQLException: Cannot load JDBC driver class 'null' "
> >> >>>>
> >> >>>> server.xml contains the following:
> >> >>>>       <parameter>
> >> >>>>         <name>driverClassName</name>
> >> >>>>         <value>com.sybase.jdbc2.jdbc.SybDriver</value>
> >> >>>>       </parameter>
> >> >>>>
> >> >>>> the webapp's web.xml contains:
> >> >>>>   <resource-ref>
> >> >>>>       <description>DB Connection</description>
> >> >>>>       <res-ref-name>jdbc/mydb</res-ref-name>
> >> >>>>       <res-type>javax.sql.DataSource</res-type>
> >> >>>>       <res-auth>Container</res-auth>
> >> >>>>   </resource-ref>
> >> >>>>
> >> >>>> and the jsp page contains
> >> >>>> DataSource ds = (DataSource)ctx.lookup("java:comp/env/jdbc/mydb");
> >> >>>>      con = ds.getConnection();  //this is where it fails
> >> >>>>
> >> >>>> When I check if ds == null, it's not null, but at the con =
> >> >>>ds.getConnection() I get the error message.
> >> >>>>
> >> >>>> The driver jar is in %CATALINA_HOME%\common\lib and
> >> >>>%CATALINA_HOME%\shared\lib, and I've set the classpath to include
> >> >>>jconn2.jar, and the following code works, so I think I've got
> >everything set
> >> >>>up properly.
> >> >>>> Class.forName("com.sybase.jdbc2.jdbc.SybDriver").newInstance();
> >> >>>> String url = "jdbc:sybase:xxxx:5000";
> >> >>>> Connection con1 = DriverManager.getConnection(url, "xxx", "xxx");
> >> >>>>
> >> >>>> Can you see why my datasources aren't working?
> >> >>>>
> >> >>>> Thanks very much for any help you can offer,
> >> >>>> Shyly
> >> >>>>
> >> >>>>
> >>
>>>> ---------------------------------------------------------------------
> >> >>>> To unsubscribe, e-mail: tomcat-user-unsubscribe@jakarta.apache.org
> >> >>>> For additional commands, e-mail:
tomcat-user-help@jakarta.apache.org
> >> >>>>
> >> >>>
> >> >>>
> >> >
> >> >
> >>
> >>
> >>
> >
> >
>
>
>




Re: using reactor?

Posted by Jason van Zyl <ja...@zenplex.com>.
On Tue, 2002-12-10 at 10:58, Brian Ewins wrote:
> Siegfried Göschl wrote:
> > What are the current ideas for refactoring the reactor? Would it make
> > sense to create a Maven process for each dependent subproject and use
> > the return value (TBD as far as I know) as an indicator of a successful
> > build?
> Re the return value, you could vote for a fix:
> http://jira.werken.com/secure/ViewIssue.jspa?key=MAVEN-109
> http://jira.werken.com/secure/ViewIssue.jspa?key=FOREHEAD-3

I will incorporate your idea into classworlds. I'm almost ready to make
the switch over to classworlds, and I'll make sure it returns correctly
before swapping out forehead.

BTW, Forehead was so named because of the banging of one's forehead
against the wall when dealing with classloaders. A little Forehead/Maven
trivia.

> 
> 
> --
> To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
> For additional commands, e-mail: <ma...@jakarta.apache.org>
-- 
jvz.

Jason van Zyl
jason@zenplex.com
http://tambora.zenplex.org

In short, man creates for himself a new religion of a rational
and technical order to justify his work and to be justified in it.
  
  -- Jacques Ellul, The Technological Society


Re: using reactor?

Posted by Siegfried Göschl <si...@it20one.at>.
Thanks a lot - this patch allows a clean Maven integration with my
existing Ant build!

Siegfried Goeschl
CTO
=================================
IT20one GmbH
mail: siegfried.goeschl@it20one.at
www.it20one.at

On 10 Dec 2002 at 15:58, Brian Ewins wrote:

> Siegfried Göschl wrote:
> > What are the current ideas for refactoring the reactor? Would it make
> > sense to create a Maven process for each dependent subproject and use
> > the return value (TBD as far as I know) as an indicator of a successful
> > build?
> Re the return value, you could vote for a fix:
> http://jira.werken.com/secure/ViewIssue.jspa?key=MAVEN-109
> http://jira.werken.com/secure/ViewIssue.jspa?key=FOREHEAD-3
> 
> I suspect that the forehead issue (which causes the maven one) isn't
> being fixed because I hear forehead is being abandoned in favour of
> classworlds (not released yet). However if you want a return value
> from maven now, it's pretty easy to fix forehead for yourself along the
> lines of my bug report (actually I just changed forehead's main() to
> not trap any exceptions). I use this to get maven status from within
> anthill.
> 
> -Baz
> 
> 
> 


Re: using reactor?

Posted by Brian Ewins <Br...@btinternet.com>.
Siegfried Göschl wrote:
> What are the current ideas for refactoring the reactor? Would it make
> sense to create a Maven process for each dependent subproject and use
> the return value (TBD as far as I know) as an indicator of a successful
> build?
Re the return value, you could vote for a fix:
http://jira.werken.com/secure/ViewIssue.jspa?key=MAVEN-109
http://jira.werken.com/secure/ViewIssue.jspa?key=FOREHEAD-3

I suspect that the forehead issue (which causes the maven one) isn't 
being fixed because I hear forehead is being abandoned in favour of 
classworlds (not released yet). However if you want a return value from 
maven now, it's pretty easy to fix forehead for yourself along the lines 
of my bug report (actually I just changed forehead's main() to not trap 
any exceptions). I use this to get maven status from within anthill.
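The fix Brian describes amounts to letting build failures reach the caller as a nonzero process exit status instead of being swallowed. A minimal sketch of the idea (a hypothetical launcher, not the actual Forehead code):

```java
public class Launcher {
    /**
     * Runs the build and maps the outcome to an exit status:
     * 0 on success, 1 on failure. Trapping the exception here and
     * returning normally is what hid the status from callers.
     */
    static int run(Runnable build) {
        try {
            build.run();
            return 0;
        } catch (RuntimeException e) {
            System.err.println("BUILD FAILED: " + e.getMessage());
            return 1;
        }
    }

    public static void main(String[] args) {
        int status = run(() -> { /* invoke the real build here */ });
        if (status != 0) {
            // Tools like anthill read this exit status to decide
            // whether the build succeeded.
            System.exit(status);
        }
    }
}
```
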

-Baz


RE: Help with SUM function

Posted by Jim Bury <jb...@SciQuest.com>.
Could I do something with cell names? I haven't been able to get it to keep the cell names when I generate the spreadsheet... Is there a trick or is it not doable?

Jim

-----Original Message-----
From: Michael Zalewski [mailto:zalewski@optonline.net] 
Sent: Tuesday, August 31, 2010 3:13 PM
To: user@poi.apache.org
Subject: Re: Help with SUM function

Sounds like you have a column with subtotals and a grand total. Use Excel's
SUBTOTAL function for those cells instead of SUM: SUBTOTAL ignores cells that
themselves contain SUBTOTAL results, so you can run it over the entire column
without picking out ranges.

For example
   A             B
 1 Supplier #1   1.00 
 2               2.00
 3 Subtotal      @SUBTOTAL(9, B1:B2)
 4 Supplier #2   3.00
 5               4.00
 6 Subtotal      @SUBTOTAL(9, B4:B5)
 7 GRAND TOTAL   @SUBTOTAL(9, B1:B6)

You might expect the GRAND TOTAL to be double the correct result, because the
formula's range includes the subtotals at B3 and B6. But such is not the case:
SUBTOTAL skips cells that already contain SUBTOTAL results, so only the raw
values are summed. A plain SUM over the same range would count the subtotal
cells again.

I'm not sure that the POI Formula Evaluator behaves this way. But Excel does.
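If you compute the grand total on the Java side rather than in a spreadsheet formula, the key point is the same: skip the subtotal rows so their values are not counted twice. A minimal sketch (plain stdlib Java, no POI; the row layout mirrors the example above):

```java
public class GrandTotal {
    /**
     * Sums only the raw value rows; entries flagged as subtotals
     * are skipped so they are not counted twice in the grand total.
     */
    static double grandTotal(double[] values, boolean[] isSubtotal) {
        double total = 0.0;
        for (int i = 0; i < values.length; i++) {
            if (!isSubtotal[i]) {
                total += values[i];
            }
        }
        return total;
    }

    public static void main(String[] args) {
        // Column B from the example: 1.00, 2.00, subtotal 3.00,
        // then 3.00, 4.00, subtotal 7.00.
        double[] values      = {1.00, 2.00, 3.00, 3.00, 4.00, 7.00};
        boolean[] isSubtotal = {false, false, true, false, false, true};
        System.out.println(grandTotal(values, isSubtotal)); // prints 10.0
    }
}
```
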




---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@poi.apache.org
For additional commands, e-mail: user-help@poi.apache.org




Re: Problem to parse a PDF document

Posted by pi...@huttin.com.
Many thanks, I have attached the file to the issue.

Now it works fine for this kind of document, but I see a side effect on
other documents that worked fine in the past.

I receive the following error message.

Caused by: java.io.IOException: Error: Expected an integer type,
actual='xref'
	at
org.apache.pdfbox.pdfparser.BaseParser.readInt(BaseParser.java:1541)
	at
org.apache.pdfbox.pdfparser.NonSequentialPDFParser.parseXrefObjStream(NonSequentialPDFParser.java:354)
	at
org.apache.pdfbox.pdfparser.NonSequentialPDFParser.initialParse(NonSequentialPDFParser.java:266)
	at
org.apache.pdfbox.pdfparser.NonSequentialPDFParser.parse(NonSequentialPDFParser.java:574)
	at
org.apache.pdfbox.pdmodel.PDDocument.loadNonSeq(PDDocument.java:1124)
	at
org.apache.pdfbox.pdmodel.PDDocument.loadNonSeq(PDDocument.java:1107)

If I use the PDDocument.load() method I receive this warning message:

14 juin 2012 09:58:30 org.apache.pdfbox.pdfparser.XrefTrailerResolver
setStartxref
ATTENTION: Did not found XRef object at specified startxref position
173

but the document is correctly loaded by PDFBox.

I have a problem providing the sample file, because it contains some
confidential data.

Best regards

Pierre Huttin



On Thu, 14 Jun 2012 00:23:49 +0200, Timo Boehme
<ti...@ontochem.com> wrote:
> Am 13.06.2012 14:02, schrieb pierre@huttin.com:
>> Sorry,
>>
>> apparently the pdf was not correctly attached to the previous mail, I
>> just zip it and re-attach it.
>>
>> Pierre Huttin
> 
> With resolving PDFBOX-1099
> (https://issues.apache.org/jira/browse/PDFBOX-1099) the page count is
> correct with both parsers (NonSequentialPDFParser and PDFParser).
> 
> For testing purposes it would be helpful to have your example PDF
> associated with PDFBOX-1099. Could you upload it to this issue (and
> tick the 'Grant license to ASF for inclusion in ASF works (as per the
> Apache License §5)' option), or give permission for us to do so with
> the file attached to your previous email?
> 
> 
> Best regards,
> Timo
> 
>>
>> On Wed, 13 Jun 2012 13:56:50 +0200,<pi...@huttin.com>  wrote:
>>> Hello,
>>>
>>> I have some trouble with documents where the library is not able to
>>> retrieve the number of pages and load them into the list using the
>>> PDDocument.getDocumentCatalog().getAllPages() method.
>>>
>>> The PDF file and the Java code to retrieve the number of pages are
>>> attached to this mail. Apparently the PDFParser does not read the
>>> /Pages object correctly; the refs of the pages are "8 0" and "19
>>> 0".
>>>
>>> I open the document correctly with Adobe Reader and iText RUPS; both
>>> retrieve the correct number of pages: 2.
>>>
>>> I am running my code with version 1.7.0 of PDFBox.
>>>
>>> Thanks in advance for your help.
>>>
>>> Best regards
>>>
>>> Pierre Huttin


Re: Problem to parse a PDF document

Posted by Timo Boehme <ti...@ontochem.com>.
Am 13.06.2012 14:02, schrieb pierre@huttin.com:
> Sorry,
>
> apparently the pdf was not correctly attached to the previous mail, I
> just zip it and re-attach it.
>
> Pierre Huttin

With resolving PDFBOX-1099 
(https://issues.apache.org/jira/browse/PDFBOX-1099) the page count is 
correct with both parsers (NonSequentialPDFParser and PDFParser).

For testing purposes it would be helpful to have your example PDF 
associated with PDFBOX-1099. Could you upload it to this issue (and
tick the 'Grant license to ASF for inclusion in ASF works (as per the 
Apache License §5)' option), or give permission for us to do so with
the file attached to your previous email?


Best regards,
Timo

>
> On Wed, 13 Jun 2012 13:56:50 +0200,<pi...@huttin.com>  wrote:
>> Hello,
>>
>> I have some trouble with documents where the library is not able to
>> retrieve the number of pages and load them into the list using the
>> PDDocument.getDocumentCatalog().getAllPages() method.
>>
>> The PDF file and the Java code to retrieve the number of pages are
>> attached to this mail. Apparently the PDFParser does not read the
>> /Pages object correctly; the refs of the pages are "8 0" and "19
>> 0".
>>
>> I open the document correctly with Adobe Reader and iText RUPS; both
>> retrieve the correct number of pages: 2.
>>
>> I am running my code with version 1.7.0 of PDFBox.
>>
>> Thanks in advance for your help.
>>
>> Best regards
>>
>> Pierre Huttin


-- 

  Timo Boehme
  OntoChem GmbH
  H.-Damerow-Str. 4
  06120 Halle/Saale
  T: +49 345 4780474
  F: +49 345 4780471
  timo.boehme@ontochem.com

_____________________________________________________________________

  OntoChem GmbH
  Geschäftsführer: Dr. Lutz Weber
  Sitz: Halle / Saale
  Registergericht: Stendal
  Registernummer: HRB 215461
_____________________________________________________________________


Re: PropertySelection component and the disabled field

Posted by Glen Stampoultzis <gs...@iinet.net.au>.
At 10:53 PM 1/09/2004, you wrote:
>If you want to keep its value between submits, why are you setting 
>disabled to true?

Because I don't want the user to change the value.

>Not all browsers (IIUC) respect the disabled flag.  Tapestry does 
>something special on the server-side also - and prevents the value 
>submitted from calling the setter in the binding.

Yes. I noticed that.  I've written a replacement component that makes sure 
the value is submitted by using a hidden field when it's disabled.  The 
only problem I had was that I had to change the value parameter direction 
from form to auto to get it to work.  I think I'm missing something 
important about the way form direction works.

I'll post the code to the list shortly.
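In the meantime, the rendered markup for such a workaround looks roughly like this (an illustrative sketch only, not the actual component output): the disabled select is shown to the user, while a hidden input with the same name carries the value back on submit, since browsers do not submit disabled controls.

```html
<!-- Illustrative only: names and values are made up. -->
<select name="status" disabled="disabled">
  <option value="open" selected="selected">Open</option>
  <option value="closed">Closed</option>
</select>
<input type="hidden" name="status" value="open"/>
```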



>On Sep 1, 2004, at 6:42 AM, Glen Stampoultzis wrote:
>
>>
>>Never mind... I think I know the answer to this.  It seems that this is a 
>>feature of HTML.  I guess there's nothing stopping us from creating a 
>>hidden field automatically though is there?
>>
>>At 08:32 PM 1/09/2004, you wrote:
>>
>>>The PropertySelection component doesn't update the value when the 
>>>disabled field is true.  I'm just wondering why this is the case?  I 
>>>would have thought it's much more common to want to submit the value so 
>>>that you don't have to resort to tricks to keep its value between submits.
>>>
>>>Regards,
>>>
>>>
>>>Glen Stampoultzis
>>>gstamp@iinet.net.au
>>>http://members.iinet.net.au/~gstamp/glen/
>>>
>>>
>>>---------------------------------------------------------------------
>>>To unsubscribe, e-mail: tapestry-user-unsubscribe@jakarta.apache.org
>>>For additional commands, e-mail: tapestry-user-help@jakarta.apache.org
>>
>>
>>Glen Stampoultzis
>>gstamp@iinet.net.au
>>http://members.iinet.net.au/~gstamp/glen/
>
>
>


Glen Stampoultzis
gstamp@iinet.net.au
http://members.iinet.net.au/~gstamp/glen/

Re: PropertySelection component and the disabled field

Posted by Harish Krishnaswamy <ha...@gmail.com>.
I think this is something we'll probably need if we are running
stateless. I have had this problem before and had to manually use
hidden fields to pass 'em along. Previously Howard suggested using a
single hidden field for all disabled components, which I agree is a
good way to go. Maybe we should have a flag to switch it on and off.

-Harish

On Wed, 1 Sep 2004 08:53:14 -0400, Erik Hatcher
<er...@ehatchersolutions.com> wrote:
> If you want to keep its value between submits, why are you setting
> disabled to true?
> 
> Not all browsers (IIUC) respect the disabled flag.  Tapestry does
> something special on the server-side also - and prevents the value
> submitted from calling the setter in the binding.
> 
>         Erik
> 
> 
> 
> 
> On Sep 1, 2004, at 6:42 AM, Glen Stampoultzis wrote:
> 
> >
> > Never mind... I think I know the answer to this.  It seems that this
> > is a feature of HTML.  I guess there's nothing stopping us from
> > creating a hidden field automatically though is there?
> >
> > At 08:32 PM 1/09/2004, you wrote:
> >
> >> The PropertySelection component doesn't update the value when the
> >> disabled field is true.  I'm just wondering why this is the case?  I
> >> would have thought it's much more common to want to submit the value
> >> so that you don't have to resort to tricks to keep its value between
> >> submits.
> >>
> >> Regards,
> >>
> >>
> >> Glen Stampoultzis
> >> gstamp@iinet.net.au
> >> http://members.iinet.net.au/~gstamp/glen/
> >>
> >>
> >>
> >
> >
> > Glen Stampoultzis
> > gstamp@iinet.net.au
> > http://members.iinet.net.au/~gstamp/glen/
> 
> 
>



Re: PropertySelection component and the disabled field

Posted by Erik Hatcher <er...@ehatchersolutions.com>.
If you want to keep its value between submits, why are you setting 
disabled to true?

Not all browsers (IIUC) respect the disabled flag.  Tapestry does 
something special on the server-side also - and prevents the value 
submitted from calling the setter in the binding.

	Erik


On Sep 1, 2004, at 6:42 AM, Glen Stampoultzis wrote:

>
> Never mind... I think I know the answer to this.  It seems that this 
> is a feature of HTML.  I guess there's nothing stopping us from 
> creating a hidden field automatically though is there?
>
> At 08:32 PM 1/09/2004, you wrote:
>
>> The PropertySelection component doesn't update the value when the 
>> disabled field is true.  I'm just wondering why this is the case?  I 
>> would have thought it's much more common to want to submit the value 
>> so that you don't have to resort to tricks to keep its value between 
>> submits.
>>
>> Regards,
>>
>>
>> Glen Stampoultzis
>> gstamp@iinet.net.au
>> http://members.iinet.net.au/~gstamp/glen/
>>
>>
>>
>
>
> Glen Stampoultzis
> gstamp@iinet.net.au
> http://members.iinet.net.au/~gstamp/glen/




Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Daniel Rall <dl...@collab.net>.
On Thu, 14 Dec 2006, Madan S. wrote:

> On Fri, 08 Dec 2006 06:29:25 +0530, Daniel Rall <dl...@collab.net> wrote:
> 
> >On Thu, 07 Dec 2006, Madan S. wrote:
> >
> >>On Tue, 05 Dec 2006 23:45:32 +0530, Daniel Rall <dl...@collab.net> wrote:
> >>
> >>>On Tue, 05 Dec 2006, Madan S. wrote:
> >>>
> >>>>On Tue, 05 Dec 2006 00:42:39 +0530, Daniel Rall <dl...@collab.net>  
> >>wrote:
> >...
> >>>Yes, the name doesn't match exactly what we want to do, as we're not
> >>>actually changing any WC properties here.  Too bad the field isn't
> >>>named prop_changes.  Hmmmmmm.
> >>>
> >>>All our options here appear to involve rev'ing the data structure
> >>>and its callers.  :-\
> >>>
> >>>a) Rename wcprop_changes to prop_changes, and use it for both extra
> >>>client -> repos and repos -> WC property edits, with an API contract
> >>>indicating that it must be cleared after the client -> repos prop
> >>>edits.  This could be confusing to developers using the structure for
> >>>the first time, but might use less memory.
> >>>
> >>>b) Add a repos_prop_changes field for client -> repos prop edits.
> >>>
> >>>Either way, wcprop_changes could REALLY use some better documentation!
> >>
> >>I would opt for (a) which would mean one more commit (to rename and
> >>change all occurrences of the old name) before we can check this into
> >>trunk.
> >
> >Why do you think option (a) is better?  I was leaning towards (b) on
> >the grounds that it could provide a more obvious API, but I could
> >really go either way.
> 
> Because the two usages we have currently are mutually exclusive and we
> could reuse the same field. To me, it's just one field to carry some extra
> commit-related information around... also we don't want a pointer in this
> structure that points to nothing for each and every commit, and is used
> *only* in the case of a wc to repos copy. I would prefer to reuse an
> existing (if apt) field in the structure if I am not affecting the
> existing functionality.

I was typing up a response on this thread last week, but my laptop ran
out of power and died before I had a chance to send it:

  "Madan, Mike, and I discussed this in a phone conversation.  Mike
  and I favored option (b) for clarity in a public data structure, so
  I've gone that direction.  I've committed a patch to trunk making
  incoming_prop_changes (replacing wcprop_changes) and
  outgoing_prop_changes fields available on the
  svn_client_commit_item3_t struct."

So Madan, the client committables API is available to support WC ->
repos copy on the merge-tracking branch, if not exactly in your
preferred form.  :)

Thanks, Dan

Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Madan U Sreenivasan <ma...@collab.net>.
On Fri, 08 Dec 2006 06:29:25 +0530, Daniel Rall <dl...@collab.net> wrote:

> On Thu, 07 Dec 2006, Madan S. wrote:
>
>> On Tue, 05 Dec 2006 23:45:32 +0530, Daniel Rall <dl...@collab.net> wrote:
>>
>> >On Tue, 05 Dec 2006, Madan S. wrote:
>> >
>> >>On Tue, 05 Dec 2006 00:42:39 +0530, Daniel Rall <dl...@collab.net>  
>> wrote:
> ...
>> >Yes, the name doesn't match exactly what we want to do, as we're not
>> >actually changing any WC properties here.  Too bad the field isn't
>> >named prop_changes.  Hmmmmmm.
>> >
>> >All our options here appear to involve rev'ing the data structure
>> >and its callers.  :-\
>> >
>> >a) Rename wcprop_changes to prop_changes, and use it for both extra
>> >client -> repos and repos -> WC property edits, with an API contract
>> >indicating that it must be cleared after the client -> repos prop
>> >edits.  This could be confusing to developers using the structure for
>> >the first time, but might use less memory.
>> >
>> >b) Add a repos_prop_changes field for client -> repos prop edits.
>> >
>> >Either way, wcprop_changes could REALLY use some better documentation!
>>
>> I would opt for (a) which would mean one more commit (to rename and
>> change all occurrences of the old name) before we can check this into
>> trunk.
>
> Why do you think option (a) is better?  I was leaning towards (b) on
> the grounds that it could provide a more obvious API, but I could
> really go either way.

Because the two usages we have currently are mutually exclusive and we
could reuse the same field. To me, it's just one field to carry some extra
commit-related information around... also we don't want a pointer in this
structure that points to nothing for each and every commit, and is used
*only* in the case of a wc to repos copy. I would prefer to reuse an
existing (if apt) field in the structure if I am not affecting the
existing functionality.

[snip]

Regards,
Madan.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Daniel Rall <dl...@collab.net>.
On Tue, 12 Dec 2006, Madan S. wrote:

> On Sat, 09 Dec 2006 02:27:03 +0530, Daniel Rall <dl...@collab.net> wrote:
...
> >Madan, I'm working on rev'ing the API in a fashion which will allow
> >for either of these options (since it turned out to be an enormous
> >amount of work).  If you want to tweak your WC without concerns to
> >backwards compatibility to complete the WC -> repos copy/move Merge
> >Tracking work, that would be awesome.  I'll fill in on trunk.
> 
> That would be great... that would also help me move on to the two other  
> copy/move commands... Thank you, Dan.

Madan, I've rev'd the API -- svn_client_commit_item3_t is now
available to the Subversion libraries, the command-line client, and
all the bindings, on both trunk and the merge-tracking branch.  I have
not yet changed its definition, as we were still discussing how it
should look.

I was leaning towards option B (a new field, name repos_prop_changes
or something), but you seemed to favor option A (change the name of
the existing field).  Let's complete that discussion (from previous
messages on this thread).

- Dan

Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Madan U Sreenivasan <ma...@collab.net>.
On Sat, 09 Dec 2006 02:27:03 +0530, Daniel Rall <dl...@collab.net> wrote:

[snip]

>
> Madan, I'm working on rev'ing the API in a fashion which will allow
> for either of these options (since it turned out to be an enormous
> amount of work).  If you want to tweak your WC without concerns to
> backwards compatibility to complete the WC -> repos copy/move Merge
> Tracking work, that would be awesome.  I'll fill in on trunk.

That would be great... that would also help me move on to the two other  
copy/move commands... Thank you, Dan.

Regards,
Madan.


Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Daniel Rall <dl...@collab.net>.
On Thu, 07 Dec 2006, Daniel Rall wrote:

> On Thu, 07 Dec 2006, Madan S. wrote:
> 
> > On Tue, 05 Dec 2006 23:45:32 +0530, Daniel Rall <dl...@collab.net> wrote:
> > 
> > >On Tue, 05 Dec 2006, Madan S. wrote:
> > >
> > >>On Tue, 05 Dec 2006 00:42:39 +0530, Daniel Rall <dl...@collab.net> wrote:
> ...
> > >Yes, the name doesn't match exactly what we want to do, as we're not
> > >actually changing any WC properties here.  Too bad the field isn't
> > >named prop_changes.  Hmmmmmm.
> > >
> > >All our options here appear to involve rev'ing the data structure
> > >and its callers.  :-\
> > >
> > >a) Rename wcprop_changes to prop_changes, and use it for both extra
> > >client -> repos and repos -> WC property edits, with an API contract
> > >indicating that it must be cleared after the client -> repos prop
> > >edits.  This could be confusing to developers using the structure for
> > >the first time, but might use less memory.
> > >
> > >b) Add a repos_prop_changes field for client -> repos prop edits.
> > >
> > >Either way, wcprop_changes could REALLY use some better documentation!
> > 
> > I would opt for (a) which would mean one more commit (to rename and
> > change all occurrences of the old name) before we can check this into
> > trunk.
> 
> Why do you think option (a) is better?  I was leaning towards (b) on
> the grounds that it could provide a more obvious API, but I could
> really go either way.
> 
> If we go with option (a), a single commit is actually better, since we
> have to rev the API anyway, as we're changing a field name in the public
> commit item data structure.

Madan, I'm working on rev'ing the API in a fashion which will allow
for either of these options (since it turned out to be an enormous
amount of work).  If you want to tweak your WC without concerns to
backwards compatibility to complete the WC -> repos copy/move Merge
Tracking work, that would be awesome.  I'll fill in on trunk.

Thanks, Dan

Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Daniel Rall <dl...@collab.net>.
On Thu, 07 Dec 2006, Madan S. wrote:

> On Tue, 05 Dec 2006 23:45:32 +0530, Daniel Rall <dl...@collab.net> wrote:
> 
> >On Tue, 05 Dec 2006, Madan S. wrote:
> >
> >>On Tue, 05 Dec 2006 00:42:39 +0530, Daniel Rall <dl...@collab.net> wrote:
...
> >Yes, the name doesn't match exactly what we want to do, as we're not
> >actually changing any WC properties here.  Too bad the field isn't
> >named prop_changes.  Hmmmmmm.
> >
> >All our options here appear to involve rev'ing the data structure
> >and its callers.  :-\
> >
> >a) Rename wcprop_changes to prop_changes, and use it for both extra
> >client -> repos and repos -> WC property edits, with an API contract
> >indicating that it must be cleared after the client -> repos prop
> >edits.  This could be confusing to developers using the structure for
> >the first time, but might use less memory.
> >
> >b) Add a repos_prop_changes field for client -> repos prop edits.
> >
> >Either way, wcprop_changes could REALLY use some better documentation!
> 
> I would opt for (a) which would mean one more commit (to rename and
> change all occurrences of the old name) before we can check this into
> trunk.

Why do you think option (a) is better?  I was leaning towards (b) on
the grounds that it could provide a more obvious API, but I could
really go either way.

If we go with option (a), a single commit is actually better, since we
have to rev the API anyway, as we're changing a field name in the public
commit item data structure.


> >>But I guess, like you said above, resetting the wcprop_changes field
> >>with a new array (not just setting nelts to 0) would not interfere with
> >>the current implementation.
...
> >The doc string on apr_array_header_t's nelts field says "The number of
> >active elements in the array".  It has a separate field, nalloc, which
> >represents "The number of elements allocated in the array".  My
> >understanding of apr_array_header_t is that manipulating nelts is an
> >acceptable way to shrink or clear the list (though there's also the
> >apr_array_pop() API).
> >
> >process_committed_leaf() makes proper use of the nelts field:
> >
> >  /* Do wcprops in the same log txn as revision, etc. */
> >  if (wcprop_changes && (wcprop_changes->nelts > 0))
> >    {
> 
> True, but this behavior should not be assumed by default. I'm not an expert
> in APR, so you are most probably right... but I would prefer to err on the
> safe side :)
...

I've confirmed this with APR's core developers, and submitted a patch
to APR adding an apr_array_clear() API which does exactly this (sets
nelts to 0).  We can depend on this behavior.
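
To make the nelts/nalloc distinction concrete, here is a toy stand-in for
the array type.  This is deliberately not APR itself; the field names
simply mirror apr_array_header_t's documented meanings, and
toy_array_clear() does what the proposed apr_array_clear() does:

```c
#include <assert.h>
#include <stdlib.h>

/* Toy stand-in for apr_array_header_t: nelts counts active elements,
   nalloc counts allocated slots. */
typedef struct {
    int nelts;   /* "The number of active elements in the array" */
    int nalloc;  /* "The number of elements allocated in the array" */
    int *elts;   /* element storage (ints, for simplicity) */
} toy_array_t;

static toy_array_t *toy_array_make(int nalloc)
{
    toy_array_t *a = malloc(sizeof(*a));
    a->nelts = 0;
    a->nalloc = nalloc;
    a->elts = calloc(nalloc, sizeof(int));
    return a;
}

static void toy_array_push(toy_array_t *a, int v)
{
    assert(a->nelts < a->nalloc);   /* no growth needed for this sketch */
    a->elts[a->nelts++] = v;
}

/* Clear the way apr_array_clear() does: drop the active count and keep
   the allocation, so consumers checking nelts see an empty list. */
static void toy_array_clear(toy_array_t *a)
{
    a->nelts = 0;
}
```

A consumer guarded the way process_committed_leaf() is (i.e. checking
`arr && arr->nelts > 0`) then treats a cleared array exactly like an
empty one, while the storage remains available for reuse.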

Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Madan U Sreenivasan <ma...@collab.net>.
On Tue, 05 Dec 2006 23:45:32 +0530, Daniel Rall <dl...@collab.net> wrote:

> On Tue, 05 Dec 2006, Madan S. wrote:
>
>> On Tue, 05 Dec 2006 00:42:39 +0530, Daniel Rall <dl...@collab.net> wrote:
>>
>> >On Mon, 04 Dec 2006, Madan S. wrote:

[snip]

> Yes, the name doesn't match exactly what we want to do, as we're not
> actually changing any WC properties here.  Too bad the field isn't
> named prop_changes.  Hmmmmmm.
>
> All our options here appear to involve rev'ing the data structure
> and its callers.  :-\
>
> a) Rename wcprop_changes to prop_changes, and use it for both extra
> client -> repos and repos -> WC property edits, with an API contract
> indicating that it must be cleared after the client -> repos prop
> edits.  This could be confusing to developers using the structure for
> the first time, but might use less memory.
>
> b) Add a repos_prop_changes field for client -> repos prop edits.
>
> Either way, wcprop_changes could REALLY use some better documentation!

I would opt for (a), which would mean one more commit (to rename and change
all occurrences of the old name) before we can check this into trunk.

>> But I guess like you said above, resetting the wcprop_changes with
>> a new array (not just setting nelts to 0) would not interfere with the
>> current implementation.
>
> Really?  I didn't notice any place where it might interfere.
>
> The doc string on apr_array_header_t's nelts field says "The number of
> active elements in the array".  It has a separate field, nalloc, which
> represents "The number of elements allocated in the array".  My
> understanding of apr_array_header_t is that manipulating nelts is an
> acceptable way to shrink or clear the list (though there's also the
> apr_array_pop() API).
>
> process_committed_leaf() makes proper use of the nelts field:
>
>   /* Do wcprops in the same log txn as revision, etc. */
>   if (wcprop_changes && (wcprop_changes->nelts > 0))
>     {
>

True, but this behavior should not be assumed by default. I'm not an expert
in APR, so you are most probably right... but I would prefer to err on the
safe side :)

[snip]

>> >A change like this should be committed to trunk first, as it
>> >introduces a general non-WC property setting mechanism which is
>> >decoupled from Merge Tracking.
>>
>> Agree. Will submit a patch to trunk after testing on all ra_xxx's.
>> Does ra_serf need to be tested too?
>
> Yes, but if you get a patch working with the other three, I can give
> it a whirl over ra_serf.

hmm, I tested it over trunk, and it fails over ra_dav... will check out  
the problem and let you know...

Thanks for offering to test over ra_serf.

Regards,
Madan.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@subversion.tigris.org
For additional commands, e-mail: dev-help@subversion.tigris.org

Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Daniel Rall <dl...@collab.net>.
On Tue, 05 Dec 2006, Madan S. wrote:

> On Tue, 05 Dec 2006 00:42:39 +0530, Daniel Rall <dl...@collab.net> wrote:
> 
> >On Mon, 04 Dec 2006, Madan S. wrote:
> 
> [snip]
> 
> >I have a question/concern about the use of svn_client_commit_item2_t's
> >wcprop_changes field (which references
> >http://subversion.tigris.org/issues/show_bug.cgi?id=806).
> >
> >The doc string on libsvn_client/ra.c:push_wc_prop() mentions that it
> >"implements the 'svn_ra_push_wc_prop_func_t' interface".  This
> >function does make use of wcprop_changes, which contains property data
> >sent back from the repository for *post-commit* processing.  That is,
> >it appears that any data left in wcprop_changes after a commit occurs
> >is intended to trigger some additional processing which sets its
> >property data on the WC.  See 'svn di -r 3632:3635', which shows
> >wcprop_changes propagated down to
> >libsvn_wc/adm_ops.c:process_committed_leaf(), by way of what's now the
> >svn_wc_process_committed4() -- or _committed5, on the merge-tracking
> >branch -- API.
> >
> >While I didn't notice anything in particular which would prevent us
> >from using wcprop_changes as a pre-commit queue of additional property
> >changes to send from the client to the repository, won't we need to
> >clear out the wcprop_changes array after sending the data to the repos
> >as part of the commit to avoid doing unnecessary and incorrect
> >post-commit processing (which sets the WC's properties to what we sent
> >to the repos)?  Something along the lines of a
> >"item->wcprop_changes->nelts = 0;" as the last statement at the end of
> >your "if (item->wcprop_changes)" block might suffice.
> 
> You are correct. I didn't understand this logic earlier... but again, as
> you say, this should not prevent us from using the member prior to committing
> to the repos (maybe the name doesn't exactly match what we want to do? -
> as it says 'wcprop' in a different sense?).

Yes, the name doesn't match exactly what we want to do, as we're not
actually changing any WC properties here.  Too bad the field isn't
named prop_changes.  Hmmmmmm.

All our options here appear to involve rev'ing the data structure
and its callers.  :-\

a) Rename wcprop_changes to prop_changes, and use it for both extra
client -> repos and repos -> WC property edits, with an API contract
indicating that it must be cleared after the client -> repos prop
edits.  This could be confusing to developers using the structure for
the first time, but might use less memory.

b) Add a repos_prop_changes field for client -> repos prop edits.

Either way, wcprop_changes could REALLY use some better documentation!


> But I guess like you said above, resetting the wcprop_changes with
> a new array (not just setting nelts to 0) would not interfere with the
> current implementation.

Really?  I didn't notice any place where it might interfere.

The doc string on apr_array_header_t's nelts field says "The number of
active elements in the array".  It has a separate field, nalloc, which
represents "The number of elements allocated in the array".  My
understanding of apr_array_header_t is that manipulating nelts is an
acceptable way to shrink or clear the list (though there's also the
apr_array_pop() API).

process_committed_leaf() makes proper use of the nelts field:

  /* Do wcprops in the same log txn as revision, etc. */
  if (wcprop_changes && (wcprop_changes->nelts > 0))
    {

*shrug*


> >A change like this should be committed to trunk first, as it
> >introduces a general non-WC property setting mechanism which is
> >decoupled from Merge Tracking.
> 
> Agree. Will submit a patch to trunk after testing on all ra_xxx's.
> Does ra_serf need to be tested too?

Yes, but if you get a patch working with the other three, I can give
it a whirl over ra_serf.

> Thanks for the review, Dan.

You're welcome, Madan!

- Dan

Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Madan U Sreenivasan <ma...@collab.net>.
On Tue, 05 Dec 2006 00:42:39 +0530, Daniel Rall <dl...@collab.net> wrote:

> On Mon, 04 Dec 2006, Madan S. wrote:

[snip]

> I have a question/concern about the use of svn_client_commit_item2_t's
> wcprop_changes field (which references
> http://subversion.tigris.org/issues/show_bug.cgi?id=806).
>
> The doc string on libsvn_client/ra.c:push_wc_prop() mentions that it
> "implements the 'svn_ra_push_wc_prop_func_t' interface".  This
> function does make use of wcprop_changes, which contains property data
> sent back from the repository for *post-commit* processing.  That is,
> it appears that any data left in wcprop_changes after a commit occurs
> is intended to trigger some additional processing which sets its
> property data on the WC.  See 'svn di -r 3632:3635', which shows
> wcprop_changes propagated down to
> libsvn_wc/adm_ops.c:process_committed_leaf(), by way of what's now the
> svn_wc_process_committed4() -- or _committed5, on the merge-tracking
> branch -- API.
>
> While I didn't notice anything in particular which would prevent us
> from using wcprop_changes as a pre-commit queue of additional property
> changes to send from the client to the repository, won't we need to
> clear out the wcprop_changes array after sending the data to the repos
> as part of the commit to avoid doing unnecessary and incorrect
> post-commit processing (which sets the WC's properties to what we sent
> to the repos)?  Something along the lines of a
> "item->wcprop_changes->nelts = 0;" as the last statement at the end of
> your "if (item->wcprop_changes)" block might suffice.

You are correct. I didn't understand this logic earlier... but again, as
you say, this should not prevent us from using the member prior to committing
to the repos (maybe the name doesn't exactly match what we want to do? -
as it says 'wcprop' in a different sense?).

But I guess like you said above, resetting the wcprop_changes with
a new array (not just setting nelts to 0) would not interfere with the
current implementation.


> A change like this should be committed to trunk first, as it
> introduces a general non-WC property setting mechanism which is
> decoupled from Merge Tracking.

Agree. Will submit a patch to trunk after testing on all ra_xxx's. Does
ra_serf need to be tested too?

Thanks for the review, Dan.

Regards,
Madan.


Re: [MERGE-TRACKING][PATCH] Make use of the svn_client_commit_item2_t->wcprop_changes member during commit of wc to repos copy

Posted by Daniel Rall <dl...@collab.net>.
On Mon, 04 Dec 2006, Madan S. wrote:

>    Please find attached a preparatory step to record copyfrom info on a wc  
> to repos copy. This patch makes use of the currently unused wcprop_changes  
> member of the svn_client_commit_item2_t structure, to push props  
> explicitly set in wcprop_changes to the repository editor.

This looks like the right spot to push the additional "svn:mergeinfo"
property information from the client to the repository.  I like the
direct editor drive here in libsvn_client's do_item_commit(), rather
than in svn_wc_transmit_prop_deltas(), because in WC -> repos
copy/move, we don't intend to set this merge info on the WC -- we only
set it on the repository on the copy/move destination.

>    The next step would be to calculate the merge info and fill the
> wcprop_changes member in libsvn_client/copy.c:wc_to_repos_copy().

I have a question/concern about the use of svn_client_commit_item2_t's
wcprop_changes field (which references
http://subversion.tigris.org/issues/show_bug.cgi?id=806).

The doc string on libsvn_client/ra.c:push_wc_prop() mentions that it
"implements the 'svn_ra_push_wc_prop_func_t' interface".  This
function does make use of wcprop_changes, which contains property data
sent back from the repository for *post-commit* processing.  That is,
it appears that any data left in wcprop_changes after a commit occurs
is intended to trigger some additional processing which sets its
property data on the WC.  See 'svn di -r 3632:3635', which shows
wcprop_changes propagated down to
libsvn_wc/adm_ops.c:process_committed_leaf(), by way of what's now the
svn_wc_process_committed4() -- or _committed5, on the merge-tracking
branch -- API.

While I didn't notice anything in particular which would prevent us
from using wcprop_changes as a pre-commit queue of additional property
changes to send from the client to the repository, won't we need to
clear out the wcprop_changes array after sending the data to the repos
as part of the commit to avoid doing unnecessary and incorrect
post-commit processing (which sets the WC's properties to what we sent
to the repos)?  Something along the lines of a
"item->wcprop_changes->nelts = 0;" as the last statement at the end of
your "if (item->wcprop_changes)" block might suffice.
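
The queue-then-clear pattern suggested here can be sketched as follows.
This is an illustration only: the types and names (toy_prop_t,
send_queued_props, count_prop) are invented stand-ins rather than the
actual Subversion editor API, and the property values are made up, but
the final `nelts = 0` is exactly the clearing step proposed above:

```c
#include <assert.h>

/* Invented stand-ins -- not the real Subversion types. */
typedef struct {
    const char *name;
    const char *value;
} toy_prop_t;

typedef struct {
    int nelts;        /* active entries, as in apr_array_header_t */
    toy_prop_t *elts;
} toy_prop_array_t;

/* Mimics an editor callback such as change_file_prop(). */
typedef void (*toy_change_prop_fn)(void *baton, const char *name,
                                   const char *value);

/* A trivial callback for demonstration: counts how many props it saw. */
static void count_prop(void *baton, const char *name, const char *value)
{
    (void)name; (void)value;
    ++*(int *)baton;
}

/* Drain the pre-commit property queue: push each queued change through
   the editor callback, then clear the queue so leftover entries cannot
   trigger the post-commit WC processing. */
static void send_queued_props(toy_prop_array_t *queue,
                              toy_change_prop_fn change_prop,
                              void *baton)
{
    int i;
    for (i = 0; i < queue->nelts; i++)
        change_prop(baton, queue->elts[i].name, queue->elts[i].value);
    queue->nelts = 0;  /* the "item->wcprop_changes->nelts = 0;" step */
}
```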

A change like this should be committed to trunk first, as it
introduces a general non-WC property setting mechanism which is
decoupled from Merge Tracking.


>    I have tested this code for functionality with a simple script I wrote.
> I have run 'make check' locally to test for regression, which I believe
> is sufficient given the change is only on the client side. Please let me
> know if there's anything else I should be doing.

Changes like this which drive a RA editor should be tested over all RA
mechanisms.

...
> Preparatory step for recording copyfrom info on wc to repos copy.
> 
> On the merge-tracking branch:
> 
> * subversion/libsvn_client/commit_util.c
>   (do_item_commit): Take into account the item's wcprop_changes value when
>    the item's state flag has SVN_CLIENT_COMMIT_ITEM_PROP_MODS set.
...
> --- subversion/libsvn_client/commit_util.c	(revision 22554)
> +++ subversion/libsvn_client/commit_util.c	(working copy)
> @@ -1180,6 +1180,28 @@
>            tempfile = apr_pstrdup(apr_hash_pool_get(tempfiles), tempfile);
>            apr_hash_set(tempfiles, tempfile, APR_HASH_KEY_STRING, (void *)1);
>          }
> +
> +      /* Set other prop-changes, if available in the baton */
> +      if (item->wcprop_changes)
> +        {
> +          svn_prop_t *prop;
> +          apr_array_header_t *prop_changes = item->wcprop_changes;
> +          int ctr;
> +          for (ctr = 0; ctr < prop_changes->nelts; ctr++)
> +            {
> +              prop = APR_ARRAY_IDX(prop_changes, ctr, svn_prop_t *);
> +              if (kind == svn_node_file)
> +                {
> +                  editor->change_file_prop(file_baton, prop->name,
> +                                           prop->value, pool);
> +                }
> +              else
> +                {
> +                  editor->change_dir_prop(*dir_baton, prop->name,
> +                                          prop->value, pool);
> +                }
> +            }
> +        }
>      }
>  
>    /* Finally, handle text mods (in that we need to open a file if it

RE: RE: How should permissions be set in a shared workspace environment?

Posted by Ch...@qimonda.com.
Isn't this where an integration branch would come into play? You can
either use the trunk as an integration branch, or a real SVN branch.
With it, only merged (and hence changed) code will be re-built, unless
that is NOT true with UniData.


________________________________

	From: James Oltmans [mailto:joltmans@bolosystems.com] 
	Sent: Wednesday, April 11, 2007 4:43 PM
	To: Matt Sickler
	Cc: users@subversion.tigris.org
	Subject: RE: How should permissions be set in a shared workspace
environment?
	
	

	We realize it is not the best idea and that by using individual
workspaces we would automatically avoid this problem. However, we have
other problems that make the individual workspaces solution a very
unattractive option at this time. 

	We have tried giving them their own copies. They did not like
the 20-30 mins rebuild turnaround time (we do not have a quick-build
option, we are working with UniData, not C++ or Java) and the fact that
any time they wanted to see a project team-member's contribution they
needed to rebuild. We also managed to fill up the hard drive on the
server pretty quickly with 2.5 gig a pop workspaces. 

	 

	Given these challenges we would prefer to use a shared
workspace. If you know for certain this is not possible, that is an
acceptable answer; otherwise we'd like to know if anyone has experience
with doing things "the wrong way" with a shared workspace. 

	 

	
________________________________


	From: Matt Sickler [mailto:crazyfordynamite@gmail.com] 
	Sent: Wednesday, April 11, 2007 2:07 PM
	To: James Oltmans
	Cc: users@subversion.tigris.org
	Subject: Re: How should permissions be set in a shared workspace
environment?

	 

	It's normal for several devs to work on the same _repository_,
but having more than one per _working copy_ is a very bad idea.
	At least try to get each one their own copy, and problems like
this are automatically avoided.

	On 4/10/07, James Oltmans <jo...@bolosystems.com> wrote:

	Hey all, I've got a unix question.

	 

	We're working in an environment where one or more developers
will access and work on code in the same repository. Yes, I know that's
not standard practice but we're dealing with some space limitations,
developer impatience (no one likes to wait 30 minutes to rebuild) and no
good way to only rebuild part of the working copy.

	Anyway, the subversion problem we're having is that developer A
connects to his workspace, edits some files, checks them in and is happy.
Developer B connects to the same workspace, alters the same or different
files in the same directory that developer A edited files. Developer B
checks in his code and gets the following lovely error:

	Sending        foo/bar/FILE1

	Transmitting file data .svn: Commit succeeded, but other errors
follow:

	svn: Error bumping revisions post-commit (details follow):

	svn: In directory '/.../foo/bar'

	svn: Error processing command 'committed' in '/.../foo/bar'

	svn: Error replacing text-base of 'FILE1'

	svn: Can't change perms of file /.../foo/bar/FILE1': Operation
not permitted

	svn: Your commit message was left in a temporary file:

	svn:    '/.../foo/bar/svn-commit.tmp'

	 

	This leaves directory bar locked. Running svn cleanup can
sometimes resolve the problem unless another svn file is owned by
Developer A. In the case that it's not owned by A and cleanup works,
Developer B must subsequently run svn update to get the .svn directory
up to date. This usually results in files that were only changed once to
be flagged as merged during the update because the repo version and the
current version are the same but the old text-base version is still in
the old state. 

	 

	We're running on Red Hat Enterprise Linux ES release 4 (Nahant
Update 3) with svn, version 1.4.0 (r21228)

	All developers are part of groupA and all files are
read/writable by the group. However, our .svn dirs look like the
following:

	total 36

	-r--r-----  1 DeveloperA groupA 232 Apr 10 18:57 all-wcprops

	-r--r-----  1 DeveloperA groupA 58 Apr 10 12:58 dir-prop-base

	-r--r-----  1 DeveloperA groupA 59 Apr 10 19:09 dir-props

	-r--r-----  1 DeveloperA groupA 475 Apr 10 19:09 entries

	-r--r-----  1 DeveloperA groupA 2 Apr 10 12:58 format

	drwxrwx---  2 DeveloperA groupA 4096 Apr 10 12:58 prop-base

	drwxrwx---  2 DeveloperA groupA 4096 Apr 10 12:58 props

	drwxrwx---  2 DeveloperA groupA 4096 Apr 10 12:58 text-base

	drwxrwx---  5 DeveloperA groupA 4096 Apr 10 19:09 tmp

	 

	Is there some default set of permissions that Subversion uses
when creating these files? How do I get around this permissions issue
when the files that are being denied access were created by Subversion?

	 

	Thanks!

	James Oltmans
	SCM Administrator

	Bolo Systems, Inc. 

	 

	 

	 


Re: How should permissions be set in a shared workspace environment?

Posted by Steve Bakke <st...@amd.com>.


On 4/11/07 4:42 PM, "James Oltmans" <jo...@bolosystems.com> wrote:

> We realize it is not the best idea and that by using individual workspaces we
> would automatically avoid this problem. However, we have other problems that
> make the individual workspaces solution a very unattractive option at this
> time. 
> We have tried giving them their own copies. They did not like the 20-30 mins
> rebuild turnaround time (we do not have a quick-build option, we are working
> with UniData, not C++ or Java) and the fact that any time they wanted to see a
> project team-member's contribution they needed to rebuild. We also managed to
> fill up the hard drive on the server pretty quickly with 2.5 gig a pop
> workspaces. 
>  
> Given these challenges we would prefer to use a shared workspace. If you know
> for certain this is not possible, that is an acceptable answer; otherwise we'd
> like to know if anyone has experience with doing things "the wrong way" with a
> shared workspace.
>  

Using a shared working copy is definitely possible (we are currently doing
just that).  You need to make sure people's umask is set to 002, so new
files stay group-writable.  That said, the way that we enforce it is that
the command-line client is wrapped in a script which automatically sets the
permissions to user+group rwx for any directories and rw for any files.

Obviously when users create new directories on their own, they still need to
set proper permissions.  Just make sure that they source a standard shell
init script or something to make sure their umask is set.
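
The umask mechanics behind this can be demonstrated from C.  This is a
sketch of standard POSIX behavior, not anything Subversion-specific, and
the file names are throwaway temporaries: a process creating files with
mode 0666 under umask 002 gets group-writable 0664 files, while the
common default of 022 drops group write.

```c
#include <assert.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Create a file under the given umask and report its permission bits.
   Per POSIX, the effective mode is (requested mode & ~umask). */
static mode_t create_with_umask(const char *path, mode_t mask)
{
    struct stat st;
    mode_t old = umask(mask);
    int fd = open(path, O_CREAT | O_WRONLY | O_EXCL, 0666);
    assert(fd != -1);
    assert(fstat(fd, &st) == 0);
    close(fd);
    unlink(path);    /* clean up the throwaway file */
    umask(old);      /* restore the caller's umask */
    return st.st_mode & 0777;
}
```

So a wrapper script (or login script) forcing umask 002 gives the .svn
files group write from the start, instead of fixing them up after the
fact.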

-steve

> 
> 
> From: Matt Sickler [mailto:crazyfordynamite@gmail.com]
> Sent: Wednesday, April 11, 2007 2:07 PM
> To: James Oltmans
> Cc: users@subversion.tigris.org
> Subject: Re: How should permissions be set in a shared workspace environment?
>  
> It's normal for several devs to work on the same _repository_, but having more
> than one per _working copy_ is a very bad idea.
> At least try to get each one their own copy, and problems like this are
> automatically avoided.
> 
> On 4/10/07, James Oltmans <jo...@bolosystems.com> wrote:
> 
> Hey all, I've got a unix question.
> 
>  
> 
> We're working in an environment where one or more developers will access and
> work on code in the same repository. Yes, I know that's not standard practice
> but we're dealing with some space limitations, developer impatience (no one
> likes to wait 30 minutes to rebuild) and no good way to only rebuild part of
> the working copy.
> 
> Anyway, the subversion problem we're having is that developer A connects to his
> workspace, edits some files, checks them in and is happy. Developer B connects
> to the same workspace, alters the same or different files in the same
> directory that developer A edited files. Developer B checks in his code and
> gets the following lovely error:
> 
> Sending       foo/bar/FILE1
> 
> Transmitting file data .svn: Commit succeeded, but other errors follow:
> 
> svn: Error bumping revisions post-commit (details follow):
> 
> svn: In directory '/.../foo/bar'
> 
> svn: Error processing command 'committed' in '/.../foo/bar'
> 
> svn: Error replacing text-base of 'FILE1'
> 
> svn: Can't change perms of file /.../foo/bar/FILE1': Operation not permitted
> 
> svn: Your commit message was left in a temporary file:
> 
> svn:    '/.../foo/bar/svn-commit.tmp'
> 
>  
> 
> This leaves directory bar locked. Running svn cleanup can sometimes resolve
> the problem unless another svn file is owned by Developer A. In the case that
> it's not owned by A and cleanup works, Developer B must subsequently run svn
> update to get the .svn directory up to date. This usually results in files
> that were only changed once to be flagged as merged during the update because
> the repo version and the current version are the same but the old text-base
> version is still in the old state.
> 
>  
> 
> We're running on Red Hat Enterprise Linux ES release 4 (Nahant Update 3) with
> svn, version 1.4.0 (r21228)
> 
> All developers are part of groupA and all files are read/writable by the
> group. However, our .svn dirs look like the following:
> 
> total 36
> 
> -r--r----- 1 DeveloperA groupA 232 Apr 10 18:57 all-wcprops
> 
> -r--r----- 1 DeveloperA groupA 58 Apr 10 12:58 dir-prop-base
> 
> -r--r----- 1 DeveloperA groupA 59 Apr 10 19:09 dir-props
> 
> -r--r----- 1 DeveloperA groupA 475 Apr 10 19:09 entries
> 
> -r--r----- 1 DeveloperA groupA 2 Apr 10 12:58 format
> 
> drwxrwx--- 2 DeveloperA groupA 4096 Apr 10 12:58 prop-base
> 
> drwxrwx--- 2 DeveloperA groupA 4096 Apr 10 12:58 props
> 
> drwxrwx--- 2 DeveloperA groupA 4096 Apr 10 12:58 text-base
> 
> drwxrwx--- 5 DeveloperA groupA 4096 Apr 10 19:09 tmp
> 
>  
> 
> Is there some default set of permissions that Subversion uses when creating
> these files? How do I get around this permissions issue when the files that
> are being denied access were created by Subversion?
> 
>  
> 
> Thanks!
> 
> James Oltmans
> SCM Administrator
> 
> Bolo Systems, Inc.
> 
>  
> 
>  
> 
>  
> 




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org


RE: How should permissions be set in a shared workspace environment?

Posted by James Oltmans <jo...@bolosystems.com>.
We realize it is not the best idea and that by using individual
workspaces we would automatically avoid this problem. However, we have
other problems that make the individual workspaces solution a very
unattractive option at this time. 

We have tried giving them their own copies. They did not like the 20-30
mins rebuild turnaround time (we do not have a quick-build option, we
are working with UniData, not C++ or Java) and the fact that any time
they wanted to see a project team-member's contribution they needed to
rebuild. We also managed to fill up the hard drive on the server pretty
quickly with 2.5 gig a pop workspaces. 

 

Given these challenges we would prefer to use a shared workspace. If you
know for certain this is not possible, that is an acceptable answer;
otherwise we'd like to know if anyone has experience with doing things
"the wrong way" with a shared workspace. 

 

________________________________

From: Matt Sickler [mailto:crazyfordynamite@gmail.com] 
Sent: Wednesday, April 11, 2007 2:07 PM
To: James Oltmans
Cc: users@subversion.tigris.org
Subject: Re: How should permissions be set in a shared workspace
environment?

 

It's normal for several devs to work on the same _repository_, but having
more than one per _working copy_ is a very bad idea.
At least try to get each one their own copy, and problems like this are
automatically avoided.

On 4/10/07, James Oltmans <jo...@bolosystems.com> wrote:

Hey all, I've got a unix question.

 

We're working in an environment where one or more developers will access
and work on code in the same repository. Yes, I know that's not standard
practice but we're dealing with some space limitations, developer
impatience (no one likes to wait 30 minutes to rebuild) and no good way
to only rebuild part of the working copy.

Anyway, the subversion problem we're having is that developer A connects
to his workspace, edits some files, checks them in and is happy.
Developer B connects to the same workspace, alters the same or different
files in the same directory that developer A edited files. Developer B
checks in his code and gets the following lovely error:

Sending        foo/bar/FILE1

Transmitting file data .svn: Commit succeeded, but other errors follow:

svn: Error bumping revisions post-commit (details follow):

svn: In directory '/.../foo/bar'

svn: Error processing command 'committed' in '/.../foo/bar'

svn: Error replacing text-base of 'FILE1'

svn: Can't change perms of file /.../foo/bar/FILE1': Operation not
permitted

svn: Your commit message was left in a temporary file:

svn:    '/.../foo/bar/svn-commit.tmp'

 

This leaves directory bar locked. Running svn cleanup can sometimes
resolve the problem unless another svn file is owned by Developer A. In
the case that it's not owned by A and cleanup works, Developer B must
subsequently run svn update to get the .svn directory up to date. This
usually results in files that were only changed once to be flagged as
merged during the update because the repo version and the current
version are the same but the old text-base version is still in the old
state. 

 

We're running on Red Hat Enterprise Linux ES release 4 (Nahant Update 3)
with svn, version 1.4.0 (r21228)

All developers are part of groupA and all files are read/writable by the
group. However, our .svn dirs look like the following:

total 36

-r--r-----  1 DeveloperA groupA 232 Apr 10 18:57 all-wcprops

-r--r-----  1 DeveloperA groupA 58 Apr 10 12:58 dir-prop-base

-r--r-----  1 DeveloperA groupA 59 Apr 10 19:09 dir-props

-r--r-----  1 DeveloperA groupA 475 Apr 10 19:09 entries

-r--r-----  1 DeveloperA groupA 2 Apr 10 12:58 format

drwxrwx---  2 DeveloperA groupA 4096 Apr 10 12:58 prop-base

drwxrwx---  2 DeveloperA groupA 4096 Apr 10 12:58 props

drwxrwx---  2 DeveloperA groupA 4096 Apr 10 12:58 text-base

drwxrwx---  5 DeveloperA groupA 4096 Apr 10 19:09 tmp

 

Is there some default set of permissions that Subversion uses when
creating these files? How do I get around this permissions issue when
the files that are being denied access were created by Subversion?

 

Thanks!

James Oltmans
SCM Administrator

Bolo Systems, Inc. 

 

 

 


Re: Creating First Repository

Posted by Tom Rawson <tr...@clayst.com>.
On 22 Nov 2004 Ben Collins-Sussman wrote:

> >      svnadmin.exe create --fs-type fsfs h:/svnrepos/project
> >      svn checkout file:///h:/svnrepos/project .
> >      svn add --non-recursive trunk/
> >      svn add --non-recursive trunk/html/
> >      svn add --non-recursive trunk/html/images/
> >      ... [more directories]
> >      svn add --targets allfiles.txt
> >      svn commit -m "Initial setup"
> >      svn list --verbose -R trunk/
> >
> 
> I have to admit, I'm looking at this script and I'm utterly confused... 
> it's so complex!  I don't understand what you're trying to accomplish.  
> Why not just import an entire tree into the empty repository, all at 
> once, in a single command?

Well that's what I thought too -- but I have lots of unversioned files 
in the directories in question (and some unversioned directories within 
the tree).  I didn't want to import the unversioned stuff of course, so 
I couldn't import the whole tree (right?).

So, I made a list of the files I wanted under version control in 
allfiles.txt.  But as far as I could tell svn add wouldn't add them 
unless the directories were already in the repository, so I had to add 
them first.

I'm new at this and happy to be corrected, but I spent a lot of time 
reading to try to figure out how to do what I wanted -- concisely 
described as "place files from an existing project under version 
control, where the files are mixed in a directory structure with files 
and subdirectories which are not going under version control".  [There 
are good reaosns for this structure that I wasn't prepared to trump 
with the needs of the version control system.]  I couldn't find 
anything that explained how to do this so I rolled my own.  If there's 
a better way I'd be glad to know!

Thanks,

--
Tom





RE: Creating First Repository

Posted by Brass Tilde <br...@insightbb.com>.
> >      svnadmin.exe create --fs-type fsfs h:/svnrepos/project
> >      svn checkout file:///h:/svnrepos/project .
> >      svn add --non-recursive trunk/
> >      svn add --non-recursive trunk/html/
> >      svn add --non-recursive trunk/html/images/
> >      ... [more directories]
> >      svn add --targets allfiles.txt
> >      svn commit -m "Initial setup"
> >      svn list --verbose -R trunk/
> >
> 
> I have to admit, I'm looking at this script and I'm utterly 
> confused... it's so complex!  I don't understand what you're 
> trying to accomplish.  Why not just import an entire tree 
> into the empty repository, all at once, in a single command?

Because not all the files in the tree are supposed to be in version control?
Even if one wants to set the svn:ignore property to all the
files/directories to be ignored, it's still at least a five step process,
i.e. create the appropriate directories in the repository, check them out,
set the ignore property, commit the directory, then import the files in the
tree (unless the ignore property will be respected before committing).
Apologies if I'm wrong and this is covered later than Chapter 5 in "The
Book".

For instance, I have a .NET project that I'm using to familiarize myself
with SVN.  There are a few directories, namely OBJ, BIN and DEBUG, that I
don't want in the repository because everything in them, with one exception,
is creatable from my source code.  At the same time, in the BIN directory,
there is one DLL file, which comes from another vendor (the source for which
I've stored in SVN as well), that is copied from that directory into this
one.  It's not part of this project, but is necessary for it to build and
run correctly.

My solution was to include the OBJ, BIN and DEBUG names in the
global-ignores section of the server configuration file, along with the
various and sundry user files that VS.NET creates, and then import the
entire directory tree for the project.  It works for this project, but I
hesitate to consider it a final solution because, though I can't see it
happening in my own personal case, the exclusion of those directories and
files may not be universal (especially considering that I'm evaluating SVN
for use at work, where there are a gazillion more projects than my two or
three at home, in different languages and for different platforms).
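
For reference, the global-ignores option described above lives in the
[miscellany] section of the Subversion runtime configuration file
(~/.subversion/config on Unix, or the per-user config area on Windows),
not in a repository-side file. A sketch, using the directory names from
this message plus a couple of hypothetical VS.NET user-file patterns:

```ini
[miscellany]
### Whitespace-separated glob patterns; matching names are skipped by
### "svn add", "svn import" and unversioned-file listings in "svn status".
global-ignores = OBJ BIN DEBUG *.suo *.user
```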

I look forward to finding an easier way. :)



---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org


Re: Creating First Repository

Posted by tr...@clayst.com.
On 22 Nov 2004 Ben Collins-Sussman wrote:

> >      svnadmin.exe create --fs-type fsfs h:/svnrepos/project
> >      svn checkout file:///h:/svnrepos/project .
> >      svn add --non-recursive trunk/
> >      svn add --non-recursive trunk/html/
> >      svn add --non-recursive trunk/html/images/
> >      ... [more directories]
> >      svn add --targets allfiles.txt
> >      svn commit -m "Initial setup"
> >      svn list --verbose -R trunk/
> >
> 
> I have to admit, I'm looking at this script and I'm utterly confused... 
> it's so complex!  I don't understand what you're trying to accomplish.  
> Why not just import an entire tree into the empty repository, all at 
> once, in a single command?

Well that's what I thought too -- but I have lots of unversioned files 
in the directories in question (and some unversioned directories within 
the tree).  I didn't want to import the unversioned stuff of course, so 
I couldn't import the whole tree (right?).

So, I made a list of the files I wanted under version control in 
allfiles.txt.  But as far as I could tell svn add wouldn't add them 
unless the directories were already in the repository, so I had to add 
them first.
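
The directory-adding chore described above can be scripted: every
ancestor directory of the files in allfiles.txt can be derived
mechanically, so the list of `svn add --non-recursive` lines need not be
maintained by hand. A minimal sketch (filenames are hypothetical, and
`svn` itself is not invoked here):

```shell
# Hypothetical stand-in for the allfiles.txt from the thread.
printf '%s\n' trunk/index.html trunk/html/page.html \
              trunk/html/images/logo.png > allfiles.txt

# Emit every ancestor directory of every listed file, shallowest first
# (sort -u also deduplicates), ready to feed to "svn add --non-recursive".
while read -r f; do
    d=$(dirname "$f")
    while [ "$d" != "." ]; do
        echo "$d"
        d=$(dirname "$d")
    done
done < allfiles.txt | sort -u
```

Later Subversion releases (1.5 and up) added `svn add --parents`, which
creates the intermediate directories automatically and makes this kind
of scripting unnecessary.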

I'm new at this and happy to be corrected, but I spent a lot of time 
reading to try to figure out how to do what I wanted -- concisely 
described as "place files from an existing project under version 
control, where the files are mixed in a directory structure with files 
and subdirectories which are not going under version control".  [There 
are good reasons for this structure that I wasn't prepared to trump 
with the needs of the version control system.]  I couldn't find 
anything that explained how to do this so I rolled my own.  If there's 
a better way I'd be glad to know!

Thanks,

--
Tom




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Creating First Repository

Posted by Ben Collins-Sussman <su...@collab.net>.
On Nov 22, 2004, at 4:35 PM, Tom Rawson wrote:
>
> I got it to work, but it ended up taking a little more than that.  It
> worked better for me to check out the empty project, then use add for
> everything.  Here's what I ended up with (I set up a script for this,
> so I could try different approaches and make mistakes -- it's a small
> project so deleting and retrying was OK):
>
>      svnadmin.exe create --fs-type fsfs h:/svnrepos/project
>      svn checkout file:///h:/svnrepos/project .
>      svn add --non-recursive trunk/
>      svn add --non-recursive trunk/html/
>      svn add --non-recursive trunk/html/images/
>      ... [more directories]
>      svn add --targets allfiles.txt
>      svn commit -m "Initial setup"
>      svn list --verbose -R trunk/
>

I have to admit, I'm looking at this script and I'm utterly confused... 
it's so complex!  I don't understand what you're trying to accomplish.  
Why not just import an entire tree into the empty repository, all at 
once, in a single command?


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: 2 users working on same file caused a change to be reverted rather than merged

Posted by Toby Thain <to...@telegraphics.com.au>.
On 17-Feb-09, at 11:21 PM, Andreas Schweigstill wrote:

> Hello!
>
> Toby Thain schrieb:
>> This is a flaw of *all* RAID-1 systems, software or hardware
>> (excepting the few in the high end that implement checksums).
>
> No, it is only related to some OS which do some performance
> optimizations. On Solaris one can set md_mirror:md_mirror_wow_flg=0x20
> which disables performance optimization.
>
>> It also risks integrity if the system suffers a sudden interruption
>> during writing (e.g. powerfail, panic, etc).
>
> That is another problem which can't be completely solved. But this
> has nearly nothing to do with normal operation.
>
>>> Nowadays these WOW problems "should" be non-existing but I am not
> >> sure since when Linux soft-RAID handles them correctly.
>>
>> There is no solution; the flaw is inherent in the design.
>> *Non-checksummed* RAID-1 cannot determine which side of the mirror
>> holds valid data.
>
> They just can't determine the valid mirror side in the case of an
> interruption or for computing clusters which access the same disks.
>
> And for normal use it is not important that both sides of the mirror
> contain the same data also on short timescales. It is sufficient that
> read operations on the same sector/block return consistent data.

There are several ways the mirror sides can get out of sync  
unbeknownst to the storage layer. Sudden interruption is only the  
most common. RAID-1 users should be aware that in general there is no  
recourse when it does.

--Toby

>
>> Sun's ZFS solves this problem entirely.
>
> ZFS solves many problems, but the WOW problem was/is mainly present on
> Solaris systems using the normal Sun Volume Manager with UFS.
>
> Regards
> Andreas Schweigstill
>
> -- 
> Dipl.-Phys. Andreas Schweigstill
> Schweigstill IT | Embedded Systems
> Schauenburgerstraße 116, D-24118 Kiel, Germany
> Phone: (+49) 431 53035-435, Fax: (+49) 431 53035-436
> Mobile: (+49) 171 6921973, Web: http://www.schweigstill.de/
>
> ------------------------------------------------------
> http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=1183336
>
> To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].

------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=1186160

To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].


Re: 2 users working on same file caused a change to be reverted rather than merged

Posted by Andreas Schweigstill <an...@schweigstill.de>.
Hello!

Toby Thain schrieb:
> This is a flaw of *all* RAID-1 systems, software or hardware  
> (excepting the few in the high end that implement checksums).

No, it is only related to some OS which do some performance
optimizations. On Solaris one can set md_mirror:md_mirror_wow_flg=0x20
which disables performance optimization.

> It also risks integrity if the system suffers a sudden interruption  
> during writing (e.g. powerfail, panic, etc).

That is another problem which can't be completely solved. But this
has nearly nothing to do with normal operation.

>> Nowadays these WOW problems "should" be non-existing but I am not
>> sure since when Linux soft-RAID handles them correctly.
> 
> There is no solution; the flaw is inherent in the design.
> *Non-checksummed* RAID-1 cannot determine which side of the mirror
> holds valid data.

They just can't determine the valid mirror side in the case of an
interruption or for computing clusters which access the same disks.

And for normal use it is not important that both sides of the mirror
contain the same data also on short timescales. It is sufficient that
read operations on the same sector/block return consistent data.

> Sun's ZFS solves this problem entirely.

ZFS solves many problems, but the WOW problem was/is mainly present on
Solaris systems using the normal Sun Volume Manager with UFS.

Regards
Andreas Schweigstill

-- 
Dipl.-Phys. Andreas Schweigstill
Schweigstill IT | Embedded Systems
Schauenburgerstraße 116, D-24118 Kiel, Germany
Phone: (+49) 431 53035-435, Fax: (+49) 431 53035-436
Mobile: (+49) 171 6921973, Web: http://www.schweigstill.de/

------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=1183336

To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].


Re: 2 users working on same file caused a change to be reverted rather than merged

Posted by Toby Thain <to...@telegraphics.com.au>.
On 16-Feb-09, at 7:22 AM, Andreas Schweigstill wrote:

> Hello!
>
> Jan Hendrik schrieb:
>> Funny thing though: it only happens when the files already exist
>> on the Linkstation, any "new" file invariably gets the timestamp
>> it has on the local machine.
>
> There are several ways to write to an existing file. Many editors
> use a scheme where they first rename the old file to a temporary
> file, then they write their buffer contents to a new file which gets
> the old filename, and finally removes the old file with the temporary
> filename. This way it is assured that the file cannot be completely
> lost because of a program crash or network problem. In such cases
> the user would at least find the old file. When copying the original
> file attributes after successfully writing the new file probably
> also the old timestamp gets copied. This can either be related to
> some caching issues or just a simple programming error.

In this connection, Apple's HFS has an atomic file swap operation.

>
> On some RAID systems there is also a "feature" which is called
> WOW (write on write) problem. When a block gets twice in a short
> time to a mirrored disk it can happen that write operation 1 gets
> executed on mirror 1 first and on mirror 2 afterwards. Write op. 2
> gets executed the other way. Due to this race condition there
> can be inconsistent states on the mirrors. Similar race conditions
> on mirrored disks can occur when write and read operations are
> interleaved.

This is a flaw of *all* RAID-1 systems, software or hardware  
(excepting the few in the high end that implement checksums).

It also risks integrity if the system suffers a sudden interruption  
during writing (e.g. powerfail, panic, etc).


>
> Nowadays these WOW problems "should" be non-existing but I am not
> sure since when Linux soft-RAID handles them correctly.

There is no solution; the flaw is inherent in the design.
*Non-checksummed* RAID-1 cannot determine which side of the mirror
holds valid data.

Sun's ZFS solves this problem entirely.

--Toby

>
> Regards
> Andreas Schweigstill
>
> -- 
> Dipl.-Phys. Andreas Schweigstill
> Schweigstill IT | Embedded Systems
> Schauenburgerstraße 116, D-24118 Kiel, Germany
> Phone: (+49) 431 53035-435, Fax: (+49) 431 53035-436
> Mobile: (+49) 171 6921973, Web: http://www.schweigstill.de/
>
> ------------------------------------------------------
> http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=1170400
>
> To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].

------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=1179050

To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].


Re: 2 users working on same file caused a change to be reverted rather than merged

Posted by Andreas Schweigstill <an...@schweigstill.de>.
Hello!

Jan Hendrik schrieb:
> Funny thing though: it only happens when the files already exist
> on the Linkstation, any "new" file invariably gets the timestamp
> it has on the local machine.

There are several ways to write to an existing file. Many editors
use a scheme where they first rename the old file to a temporary
file, then write their buffer contents to a new file which gets
the old filename, and finally remove the old file with the temporary
filename. This way it is assured that the file cannot be completely
lost because of a program crash or network problem: in such cases
the user would at least find the old file. If the original file
attributes are copied after the new file is successfully written,
the old timestamp probably gets copied as well. This can be related
either to some caching issue or to a simple programming error.
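
A closely related (and nowadays more common) variant of the safe-save
scheme writes the new contents to a temporary file first and then
renames it over the old one; on POSIX filesystems the rename is atomic,
so a crash mid-write leaves either the old file or the new one intact,
never a half-written mix. A minimal sketch with made-up filenames:

```shell
f=demo.txt
printf 'old contents\n' > "$f"

# Write the replacement contents to a temporary file beside the target...
printf 'new contents\n' > "$f.tmp"
# ...then atomically swap it into place.  If the program crashes before
# this line, demo.txt still holds the old contents.
mv "$f.tmp" "$f"
cat "$f"
```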

On some RAID systems there is also a "feature" called the
WOW (write-on-write) problem. When a block gets written twice in a
short time to a mirrored disk, it can happen that write operation 1
gets executed on mirror 1 first and on mirror 2 afterwards, while
write op. 2 gets executed the other way around. Due to this race
condition there can be inconsistent states on the mirrors. Similar
race conditions on mirrored disks can occur when write and read
operations are interleaved.

Nowadays these WOW problems "should" be non-existent, but I am not
sure since when Linux soft-RAID has handled them correctly.

Regards
Andreas Schweigstill

-- 
Dipl.-Phys. Andreas Schweigstill
Schweigstill IT | Embedded Systems
Schauenburgerstraße 116, D-24118 Kiel, Germany
Phone: (+49) 431 53035-435, Fax: (+49) 431 53035-436
Mobile: (+49) 171 6921973, Web: http://www.schweigstill.de/

------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=1170400

To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].


Re: Tapestry 3 DirectLink and stateful (Thanks a lot!)

Posted by Mike Davis <mi...@lookhere.co.za>.
Hi,

Thanks for that - it solved my problem neatly ...

I'd also like to say thanks to the list - great to see a big community
that helps out so quickly and politely!

Thanks
Mike

On Thu, 03 May 2007 15:46:18 +0100
Richard Kirby <rb...@capdm.com> wrote:

> Hi Mike,
> 
> I think you will find that the cookie used for session tracking is 
> actually generated by the servlet engine 
> (Tomcat/Jetty/Weblogic/whatever) and nothing to do with Tapestry.
> 
> However, since cookies are nothing but HTTP header lines, you could 
> always use a Filter to hack the cookie header line. Alternatively, if 
> you are using mod_proxy to hook your apache to your servlet engine,
> look into using ProxyPassReverseCookiePath and
> ProxyPassReverseCookieDomain.
> 
> Cheers
> 
> Richard
> 
> Mike Davis wrote:
> > Hi,
> >
> > As it turns out, the problem is actually that I've used Apache's
> > mod_rewrite to transparently eliminate the context part of the path
> > ('/xyz/app.htm' => '/app.htm') and obviously Tapestry uses the
> > context as part of the cookie's path ... 
> >
> > Is there a way to setup Tapestry so that it will set up the cookie
> > path without the context (or with an alternate path)? I have other
> > applications on the same app server, so I still need to keep the
> > apps in separate contexts.
> >
> > I have taken a look in the code and I suppose I could simply hack it
> > until it works, but obviously I'm hoping that it's possible just
> > using a config file.
> >
> > Thanks
> > Mike Davis
> >
> >
> > On Thu, 3 May 2007 08:47:10 +0200
> > Mike Davis <md...@lookhere.co.za> wrote:
> >
> >   
> >> Hi,
> >>
> >> The problem is that I have a page that I'd like users to be able to
> >> access with or without a session. This works fine, but calling
> >> getVisit() on that page returns null whether or not the user
> >> previously had an accessible visit object. Once a user has been to
> >> that page, calls to getVisit() from *any other page* also return
> >> null, again regardless of whether a visit object was previously
> >> accessible.
> >>
> >> Thanks
> >> Mike Davis
> >>
> >>
> >>
> >> On Thu, 3 May 2007 04:50:29 +0300
> >> "Andreas Andreou" <an...@di.uoa.gr> wrote:
> >>
> >>     
> >>> No, that's not correct... session is not invalidated when
> >>> clicking on a DirectLink having the stateful flag set to false
> >>>
> >>> What exactly is the problem you're facing?
> >>>
> >>> On 5/3/07, Mike Davis <md...@lookhere.co.za> wrote:
> >>>       
> >>>> Hi all,
> >>>>
> >>>> I've been asked to do some work on a Tapestry 3 application
> >>>> (unfortunately I can't upgrade it!) and I'm struggling with a
> >>>> DirectLink/session issue. I would like to allow our users to
> >>>> access certain pages without session, yet still be able to return
> >>>> to pages that require a session, without losing any stored state.
> >>>>
> >>>> Is it possible to create a Tapestry app that allows a user to do
> >>>> the following?
> >>>> - open a page that creates and uses a visit object
> >>>> - click a DirectLink with the stateful flag set to false
> >>>> - go back to the first page and use the original visit object
> >>>> again
> >>>>
> >>>> Is it correct to say that once a DirectLink is clicked with the
> >>>> stateful flag set to false, any session data associated with that
> >>>> user's session is invalidated?
> >>>>
> >>>> Thanks
> >>>> Mike Davis
> >>>>
> >>>> ---------------------------------------------------------------------
> >>>> To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
> >>>> For additional commands, e-mail: users-help@tapestry.apache.org
> >>>>
> >>>>
> >>>>         
> >>>       
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
> >> For additional commands, e-mail: users-help@tapestry.apache.org
> >>
> >>     
> >
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
> > For additional commands, e-mail: users-help@tapestry.apache.org
> >   
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
> For additional commands, e-mail: users-help@tapestry.apache.org
> 

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
For additional commands, e-mail: users-help@tapestry.apache.org


Re: Tapestry 3 DirectLink and stateful

Posted by Richard Kirby <rb...@capdm.com>.
Hi Mike,

I think you will find that the cookie used for session tracking is 
actually generated by the servlet engine 
(Tomcat/Jetty/Weblogic/whatever) and nothing to do with Tapestry.

However, since cookies are nothing but HTTP header lines, you could 
always use a Filter to hack the cookie header line. Alternatively, if 
you are using mod_proxy to hook your apache to your servlet engine, look 
into using ProxyPassReverseCookiePath and ProxyPassReverseCookieDomain.
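
A sketch of the mod_proxy route mentioned above, assuming the
application is deployed under an "/xyz" context on a local servlet
engine (hostnames, ports and the context name are made up):

```apache
ProxyPass        / http://localhost:8080/xyz/
ProxyPassReverse / http://localhost:8080/xyz/
# Rewrite the Path attribute of Set-Cookie headers coming back from the
# backend, so the session cookie issued for /xyz is valid for / instead:
ProxyPassReverseCookiePath /xyz /
```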

Cheers

Richard

Mike Davis wrote:
> Hi,
>
> As it turns out, the problem is actually that I've used Apache's
> mod_rewrite to transparently eliminate the context part of the path
> ('/xyz/app.htm' => '/app.htm') and obviously Tapestry uses the context
> as part of the cookie's path ... 
>
> Is there a way to setup Tapestry so that it will set up the cookie path
> without the context (or with an alternate path)? I have other
> applications on the same app server, so I still need to keep the apps
> in separate contexts.
>
> I have taken a look in the code and I suppose I could simply hack it
> until it works, but obviously I'm hoping that it's possible just using
> a config file.
>
> Thanks
> Mike Davis
>
>
> On Thu, 3 May 2007 08:47:10 +0200
> Mike Davis <md...@lookhere.co.za> wrote:
>
>   
>> Hi,
>>
>> The problem is that I have a page that I'd like users to be able to
>> access with or without a session. This works fine, but calling
>> getVisit() on that page returns null whether or not the user
>> previously had an accessible visit object. Once a user has been to
>> that page, calls to getVisit() from *any other page* also return null,
>> again regardless of whether a visit object was previously accessible.
>>
>> Thanks
>> Mike Davis
>>
>>
>>
>> On Thu, 3 May 2007 04:50:29 +0300
>> "Andreas Andreou" <an...@di.uoa.gr> wrote:
>>
>>     
>>> No, that's not correct... session is not invalidated when clicking
>>> on a DirectLink having the stateful flag set to false
>>>
>>> What exactly is the problem you're facing?
>>>
>>> On 5/3/07, Mike Davis <md...@lookhere.co.za> wrote:
>>>       
>>>> Hi all,
>>>>
>>>> I've been asked to do some work on a Tapestry 3 application
>>>> (unfortunately I can't upgrade it!) and I'm struggling with a
>>>> DirectLink/session issue. I would like to allow our users to
>>>> access certain pages without session, yet still be able to return
>>>> to pages that require a session, without losing any stored state.
>>>>
>>>> Is it possible to create a Tapestry app that allows a user to do
>>>> the following?
>>>> - open a page that creates and uses a visit object
>>>> - click a DirectLink with the stateful flag set to false
>>>> - go back to the first page and use the original visit object
>>>> again
>>>>
>>>> Is it correct to say that once a DirectLink is clicked with the
>>>> stateful flag set to false, any session data associated with that
>>>> user's session is invalidated?
>>>>
>>>> Thanks
>>>> Mike Davis
>>>>
>>>> ---------------------------------------------------------------------
>>>> To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
>>>> For additional commands, e-mail: users-help@tapestry.apache.org
>>>>
>>>>
>>>>         
>>>       
>> ---------------------------------------------------------------------
>> To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
>> For additional commands, e-mail: users-help@tapestry.apache.org
>>
>>     
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
> For additional commands, e-mail: users-help@tapestry.apache.org
>   


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@tapestry.apache.org
For additional commands, e-mail: users-help@tapestry.apache.org


Get values from Postgres database

Posted by Markus Blasl <bl...@fzi.de>.
Hello,

I want to set up an application with a Postgres database.
It's my first contact with Cocoon so far; maybe someone can help.
What I did so far:
installed Postgres, Apache, Tomcat and Cocoon.
Everything is working fine, and I also played a little bit with the
Cocoon features.

Now I downloaded the JDBC driver for Postgres and put it into Cocoon's
lib dir, then edited web.xml:

<init-param>
     <param-name>load-class</param-name>
     <param-value>
       <!-- For IBM WebSphere:
       com.ibm.servlet.classloader.Handler -->

       <!-- For Database Driver: -->
       org.hsqldb.jdbcDriver
       org.postgresql.Driver
       <!-- For parent ComponentManager sample:
       org.apache.cocoon.samples.parentcm.Configurator
       -->
     </param-value>
   </init-param>

And I also edited cocoon.xconf:

<datasources>
   <jdbc name="films">
     <pool-controller min="5" max="10"/>
   <dburl>jdbc:postgresql:test://localhost:5432</dburl>
     <user>blasl</user>
     <password/>
   </jdbc>
 </datasources>

If I put something other than test (my database) there, then on tty0
comes up DATABASE ... not found. So I guess that everything is set up
OK so far.

Now I created a new directory under mount and put an xml-file in there:

<?xml version="1.0" encoding="ISO-8859-1"?>
<xsp:page language="java"
xmlns:xsp="http://apache.org/xsp"
xmlns:sql="http://apache.org/cocoon/SQL/v2"
 >
<page>
 <sql:connection>
   <sql:pool>films</sql:pool>
   <transactions>
     <sql:execute-query>
     <sql:query>
       select * from films
     </sql:query>
     <sql:results>
            <sql:row-results>
              <trans>
                <id>
                  <sql:get-string column="code"/>
                </id>
        <my_title>
          <sql:get-string column="title"/>
        </my_title>
                <date1>
                  <sql:get-date column="date_prod"
                     format="dd. MMM. yyyy"/>
                </date1>
                <date2>
                  <sql:get-date column="date_in"
                     format="dd. MMM. yyyy"/>
                </date2>
                <account>
                  <sql:get-string column="kind"/>
                </account>
              </trans>
            </sql:row-results>
          </sql:results>
          <sql:no-results>
            No records found...
          </sql:no-results>
          <sql:error-results>
            SQL Exception: <sql:get-message/>
          </sql:error-results>
        </sql:execute-query>
      </transactions>
    </sql:connection>
  </page>
  </xsp:page>

I also have my own sitemap.xmap file in there:

<?xml version="1.0" encoding="iso-8859-1"?>
<map:sitemap xmlns:map="http://apache.org/cocoon/sitemap/1.0">

   <!-- use the standard components -->
   <map:components>
       <map:generators default="file"/>
       <map:transformers default="xslt"/>
       <map:readers default="resource"/>
       <map:serializers default="html"/>
       <map:selectors default="browser"/>
       <map:matchers default="wildcard"/>
   </map:components>
       <map:pipelines>
       <map:pipeline>
           <!-- respond to *.html requests with
                our docs processed by doc2html.xsl -->
           <map:match pattern="*.html">
               <map:generate src="{1}.xml"/>
               <map:transform src="doc2html.xsl"/>
               <map:serialize type="html"/>
           </map:match>
         <map:match pattern="sql/*">

    <map:transform type="sql">
    <map:parameter name="use-connection" value="films"/>
    </map:transform>
    <map:transform src="stylesheets/simple-sql2html.xsl"/>
    <map:serialize/>
   </map:match>
   </map:pipeline>
   </map:pipelines>
</map:sitemap>

This is the table of my database:
             Table "public.films"
  Column   |          Type           | Modifiers
-----------+-------------------------+-----------
 code      | character(5)            | not null
 title     | character varying(40)   | not null
 date_prod | date                    |
 date_in   | date                    |
 kind      | character(15)           |
 len       | interval hour to minute |
Indexes: firstkey primary key btree (code)



When I now try to get the HTML output of the XML file, all I get is this:

films select * from films No records found... SQL Exception:


Please, if anyone has an idea how to fix it, I'd really be happy.

Thanks in advance,

Markus
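
One thing worth double-checking in the setup above is the <dburl>
value: in a standard PostgreSQL JDBC URL the database name comes after
the host and port rather than before them. A corrected sketch of the
cocoon.xconf entry (same pool name and database as in the message):

```xml
<datasources>
  <jdbc name="films">
    <pool-controller min="5" max="10"/>
    <!-- database name follows host:port in PostgreSQL JDBC URLs -->
    <dburl>jdbc:postgresql://localhost:5432/test</dburl>
    <user>blasl</user>
    <password/>
  </jdbc>
</datasources>
```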




---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-users-unsubscribe@xml.apache.org
For additional commands, e-mail: cocoon-users-help@xml.apache.org


Re: OOo and ZipArchive serializer

Posted by Upayavira <uv...@upaya.co.uk>.
Let me know what version of Cocoon you're using and I'll try to send a correct patch.

Upayavira

On 12 Jun 2003 at 19:49, Georges Roux wrote:

>  Sorry, the patch fails, I think; it's better to wait for the next
> version to get a more efficient Zip serializer.
> 
> patch < ZipSerializer.patch
> patching file ZipArchiveSerializer.java
> Hunk #1 FAILED at 111.
> Hunk #2 FAILED at 288.
> 2 out of 2 hunks FAILED -- saving rejects to file 
> ZipArchiveSerializer.java.rej
> 
> 
> Georges
> 
> Upayavira wrote:
> 
> >The following section of this message contains a file attachment
> >prepared for transmission using the Internet MIME message format. If
> >you are using Pegasus Mail, or any another MIME-compliant system, you
> >should be able to save it or view it from within your mailer. If you
> >cannot, please ask your system administrator for assistance.
> >
> >   ---- File information -----------
> >     File:  ZipSerializer.patch
> >     Date:  12 Jun 2003, 12:29
> >     Size:  1968 bytes.
> >     Type:  Unknown
> >  
> >
> >---------------------------------------------------------------------
> >---
> >
> >---------------------------------------------------------------------
> >To unsubscribe, e-mail: cocoon-users-unsubscribe@xml.apache.org For
> >additional commands, e-mail: cocoon-users-help@xml.apache.org
> >
> 



---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-users-unsubscribe@xml.apache.org
For additional commands, e-mail: cocoon-users-help@xml.apache.org


Re: OOo and ZipArchive serializer

Posted by Georges Roux <ge...@pacageek.org>.
Sorry, the patch fails, I think; it's better to wait for the next
version to get a more efficient Zip serializer.

patch < ZipSerializer.patch
patching file ZipArchiveSerializer.java
Hunk #1 FAILED at 111.
Hunk #2 FAILED at 288.
2 out of 2 hunks FAILED -- saving rejects to file 
ZipArchiveSerializer.java.rej


Georges

Upayavira wrote:

>The following section of this message contains a file attachment
>prepared for transmission using the Internet MIME message format.
>If you are using Pegasus Mail, or any another MIME-compliant system,
>you should be able to save it or view it from within your mailer.
>If you cannot, please ask your system administrator for assistance.
>
>   ---- File information -----------
>     File:  ZipSerializer.patch
>     Date:  12 Jun 2003, 12:29
>     Size:  1968 bytes.
>     Type:  Unknown
>  
>
>------------------------------------------------------------------------
>
>---------------------------------------------------------------------
>To unsubscribe, e-mail: cocoon-users-unsubscribe@xml.apache.org
>For additional commands, e-mail: cocoon-users-help@xml.apache.org
>

Re: Spring integration

Posted by Bruno Dusausoy <bd...@yp5.be>.
On Tue, 15 Jun 2010 16:05:04 -0700, David Blevins <da...@visi.com>
wrote:

[...]
> 
> Sorry for responses on the list being a bit slow, we're working quite
> heavily on EJB 3.1 at the moment.
> 
No problem, it's way more important than my question :).
 
> Maybe if you could share some of the motivation why you need Spring to
> hand so much of the transactional work.
> 
Well, as I said, it's because we make intensive use of Carbon5 Test
Support (http://code.google.com/p/c5-test-support/).
It really eases our database testing, allowing for painless database
setup/cleanup, transaction rollback after tests, ...
However, it's based on Spring's testing hierarchy.
That's the reason why I'd like to use Spring/OpenEJB integration.
I guess I'll have to recreate a similar testing hierarchy for OpenEJB.
Unfortunately I'm not as familiar with EJB/OpenEJB as with Spring ... yet
;)

> Does the application that runs in Glassfish use Spring created
> EntityManagers and DataSources rather than the ones created by
Glassfish?
>
No, in production it's all handled by Glassfish itself: no Spring
involved.
 
> If the goal is simply to do things just as would be done in Glassfish
and
> be able to write transactional tests, then maybe check out this example:
> 
>  
http://svn.apache.org/repos/asf/openejb/trunk/openejb3/examples/transaction-rollback/
>
I'll check it out. Thanks.
 
> Maybe also this one as it uses the same persistence provider as
Glassfish
> uses:
> 
>  
http://svn.apache.org/repos/asf/openejb/trunk/openejb3/examples/jpa-eclipselink/
> 
> As well it is possible to supply an entirely different persistence.xml
for
> just testing purposes:
> 
>   http://openejb.apache.org/3.0/alternate-descriptors.html
> 
> Hope some of this helps!
>
Sure, it definitely helped.
Thanks a lot and keep up the excellent work!

Btw, I've read your call for contribution, and if I have enough spare time
I will definitely help.
But don't take it for granted ;).
 
-- 
Bruno Dusausoy
YP5 Software
--
Pensez environnement : limitez l'impression de ce mail.
Please don't print this e-mail unless you really need to.

Re: Spring integration

Posted by David Blevins <da...@visi.com>.
On Jun 4, 2010, at 12:42 AM, Bruno Dusausoy wrote:

> On Thu, 03 Jun 2010 15:44:09 +0200, Bruno Dusausoy <bd...@yp5.be>
> wrote:
> 
> [...]
> 
> Ok, so I'm thinking of this sequence of initialization :
> 
> - Declaring a DBCP DataSource within Spring (may be not needed since
> OpenEJB also uses DBCP);
> - Creating an EntityManager with LocalContainerEntityManagerFactoryBean,
> so I can tweak the creation of the former;
> - Launching OpenEJB;
> - Getting the transaction manager from OpenEJB and link it to Spring's
> JtaTransactionManager thanks to its reference;
> - Declaring <tx:annotation-driven/> so I can use @Transactional
> annotations in my classes;
> - Injecting the DataSource declared above into OpenEJB (is it needed since
> the EntityManager will have a reference to it ?);
> - Injecting the EntityManager into the EJB's who need it;
> 
> Would this work, and as asked before, will the whole transaction process
> be coherent ?

Sorry for responses on the list being a bit slow, we're working quite heavily on EJB 3.1 at the moment.

Maybe if you could share some of the motivation why you need Spring to handle so much of the transactional work.

Does the application that runs in Glassfish use Spring created EntityManagers and DataSources rather than the ones created by Glassfish?

If the goal is simply to do things just as would be done in Glassfish and be able to write transactional tests, then maybe check out this example:

  http://svn.apache.org/repos/asf/openejb/trunk/openejb3/examples/transaction-rollback/

Maybe also this one as it uses the same persistence provider as Glassfish uses:

  http://svn.apache.org/repos/asf/openejb/trunk/openejb3/examples/jpa-eclipselink/

As well it is possible to supply an entirely different persistence.xml for just testing purposes:

  http://openejb.apache.org/3.0/alternate-descriptors.html

Hope some of this helps!


-David
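Bruno's initialization sequence can be sketched as Spring XML (the Spring bean class names are real; the DBCP driver/URL settings and the OpenEJB TransactionManager JNDI name are placeholder assumptions, not a tested configuration):

```xml
<!-- 1. DataSource (DBCP); driver and url are placeholders -->
<bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource"
      destroy-method="close">
  <property name="driverClassName" value="org.hsqldb.jdbcDriver"/>
  <property name="url" value="jdbc:hsqldb:mem:testdb"/>
</bean>

<!-- 2. EntityManagerFactory built by Spring, so its creation can be tweaked -->
<bean id="entityManagerFactory"
      class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
  <property name="dataSource" ref="dataSource"/>
</bean>

<!-- 4. Hand OpenEJB's JTA TransactionManager to Spring; the JNDI name is an assumption -->
<bean id="transactionManager"
      class="org.springframework.transaction.jta.JtaTransactionManager">
  <property name="transactionManagerName" value="java:comp/TransactionManager"/>
</bean>

<!-- 5. Enable @Transactional -->
<tx:annotation-driven transaction-manager="transactionManager"/>
```

Steps 3, 6 and 7 of the sequence (launching OpenEJB and injecting the DataSource/EntityManager into the EJBs) happen outside this file.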


Request Parameters

Posted by Ragia <ra...@asset.com.eg>.
 
Hello All,

	Working on a custom store, I need to send my own parameters through
the request object. Just need to know how I can send these values to my
custom store implementations. Should it be through NodeRevisionDescriptor or
is it possible to access the request object from inside the store
implementation??

Any ideas???

Thanx in advance :)

Rooja


---------------------------------------------------------------------
To unsubscribe, e-mail: slide-user-unsubscribe@jakarta.apache.org
For additional commands, e-mail: slide-user-help@jakarta.apache.org


Re: Where To Start

Posted by Andreas Probst <an...@gmx.net>.
I think both our replies are pretty complementary.

:-)

On 22 Aug 2004 at 0:13, James Mason wrote:

> I'm thinking this reply makes more sense. Perhaps you should ignore 
> mine. It's late and I need sleep :).
> 
> Thanks Andreas.
> 
> -James
> 
> Andreas Probst wrote:
> > On 21 Aug 2004 at 3:26, Jack Kada wrote:
> > 
> > 
> >>Andreas,
> >>
> >>Thanks so much for getting back to me.  I see what you
> >>mean about the front page and things are looking
> >>better.
> >>
> >>If you dont mind - I kind of gave up using Slide in
> >>favour of Apache webDav module with Apache2.  Do you
> >>have any thoughts on this.  This was really because of
> >>the lack of documentation. 
> > 
> > 
> > I don't mind.
> > I don't know about the Apache stuff.
> > 
> > 
> >> If i am able to get help from this mailing list
> >>regarding setting up slide i would be more then happy
> >>to contribute to Slide by writing a quick start
> >>tutorial so that future developers dont have to have a
> >>hard time using this.
> >>
> >>Do you mind telling me where the paths for the files
> >>are kept.  In the domain.xml you set up scope to be
> >>"/files" (say)  where is that /files actually kept?
> > 
> > 
> > See the <store> definition. There is a <parameter 
> > name="rootpath"> and a <parameter name="workpath">.
> > They are set to store/metadata and work/metadata per default. 
> > Set it to any (full) path you like to control the actual 
> > location. In the default setting they go into the current work 
> > directory.
> > 
> > 
> >>Furthermore, how to create several folders for each
> >>project.  i know how to use Jass module and i dont
> >>need any help with the Java programming side.  Just
> >>setting things up is causing me problems
> > 
> > 
> > You can create folders inside the Domain.xml. Use this for your 
> > standard folders which should always be there. See the 
> > <data><objectnode> definition.
> > 
> > Of course you can create directories during run-time using the 
> > WebDAV MKCOL. Slide contains a WebDAV client library. But you 
> > can also use MS Webfolders or whatever WebDAV client you like.
> > 
> > 
> > Regards, Andreas
> > 
> > 
> >>Many Thanks indeed
> >>
> >>--- Andreas Probst <an...@gmx.net> wrote:
> >>
> >>
> >>>Hi Jack,
> >>>
> >>>see
> >>>
> >>>/home/cvspublic/jakarta-
> >>>
> >>
> >>slide/src/webdav/server/org/apache/slide/webdav/WebdavServlet.jav
> >>
> >>>a,v 1.63
> >>>2004/08/05 14:43:34 dflorey Exp $ public void
> >>>init():
> >>>        if (directoryBrowsing) {
> >>>            directoryIndexGenerator =
> >>>                new DirectoryIndexGenerator
> >>>                (token, 
> >>>(WebdavServletConfig)getServletConfig());
> >>>        }
> >>>
> >>>$Header:
> >>>/home/cvspublic/jakarta-
> >>>
> >>
> >>slide/src/webdav/server/org/apache/slide/webdav/util/DirectoryInd
> >>
> >>>exGenerator
> >>>.java,v 1.8 2004/08/05 14:43:31 dflorey Exp $ public
> >>>void 
> >>>generate(HttpServletRequest req,
> >>>HttpServletResponse res)
> >>>
> >>>It might be you can configure your own
> >>>DirectoryIndexGenerator 
> >>>to use in web.xml.
> >>>
> >>>Regards,
> >>>
> >>>Andreas
> >>>
> >>>
> >>>On 21 Aug 2004 at 1:26, Jack Kada wrote:
> >>>
> >>>
> >>>>Developers,
> >>>>
> >>>>I spent all of Friday reading mailing lists but
> >>>>couldnt find the answer to the following:
> >>>>
> >>>>How can i customise the front page so i can
> >>>
> >>>include
> >>>
> >>>>image and background?  I read about and know how
> >>>
> >>>to
> >>>
> >>>>use JSP's but where is the JavaBean to get the
> >>>
> >>>listing
> >>>
> >>>>of folders available at any one time.
> >>>>
> >>>>Many Thanks
> >>>>
> >>>>
> >>>
> >>>
> > 
> > 
> 



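The <store> parameters Andreas describes look roughly like this inside Domain.xml (a sketch built from the values quoted above; the store name attribute and any sibling elements are placeholders and vary by Slide version):

```xml
<store name="defaultstore">
  <!-- rootpath/workpath control where content and work files land;
       the defaults per the thread are store/metadata and work/metadata -->
  <parameter name="rootpath">store/metadata</parameter>
  <parameter name="workpath">work/metadata</parameter>
</store>
```

Setting these to full paths moves the data out of the current working directory, as described in the reply.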


Re: Where To Start

Posted by James Mason <ma...@apache.org>.
I'm thinking this reply makes more sense. Perhaps you should ignore 
mine. It's late and I need sleep :).

Thanks Andreas.

-James

Andreas Probst wrote:
> On 21 Aug 2004 at 3:26, Jack Kada wrote:
> 
> 
>>Andreas,
>>
>>Thanks so much for getting back to me.  I see what you
>>mean about the front page and things are looking
>>better.
>>
>>If you dont mind - I kind of gave up using Slide in
>>favour of Apache webDav module with Apache2.  Do you
>>have any thoughts on this.  This was really because of
>>the lack of documentation. 
> 
> 
> I don't mind.
> I don't know about the Apache stuff.
> 
> 
>> If i am able to get help from this mailing list
>>regarding setting up slide i would be more then happy
>>to contribute to Slide by writing a quick start
>>tutorial so that future developers dont have to have a
>>hard time using this.
>>
>>Do you mind telling me where the paths for the files
>>are kept.  In the domain.xml you set up scope to be
>>"/files" (say)  where is that /files actually kept?
> 
> 
> See the <store> definition. There is a <parameter 
> name="rootpath"> and a <parameter name="workpath">.
> They are set to store/metadata and work/metadata per default. 
> Set it to any (full) path you like to control the actual 
> location. In the default setting they go into the current work 
> directory.
> 
> 
>>Furthermore, how to create several folders for each
>>project.  i know how to use Jass module and i dont
>>need any help with the Java programming side.  Just
>>setting things up is causing me problems
> 
> 
> You can create folders inside the Domain.xml. Use this for your 
> standard folders which should always be there. See the 
> <data><objectnode> definition.
> 
> Of course you can create directories during run-time using the 
> WebDAV MKCOL. Slide contains a WebDAV client library. But you 
> can also use MS Webfolders or whatever WebDAV client you like.
> 
> 
> Regards, Andreas
> 
> 
>>Many Thanks indeed
>>
>>--- Andreas Probst <an...@gmx.net> wrote:
>>
>>
>>>Hi Jack,
>>>
>>>see
>>>
>>>/home/cvspublic/jakarta-
>>>
>>
>>slide/src/webdav/server/org/apache/slide/webdav/WebdavServlet.jav
>>
>>>a,v 1.63
>>>2004/08/05 14:43:34 dflorey Exp $ public void
>>>init():
>>>        if (directoryBrowsing) {
>>>            directoryIndexGenerator =
>>>                new DirectoryIndexGenerator
>>>                (token, 
>>>(WebdavServletConfig)getServletConfig());
>>>        }
>>>
>>>$Header:
>>>/home/cvspublic/jakarta-
>>>
>>
>>slide/src/webdav/server/org/apache/slide/webdav/util/DirectoryInd
>>
>>>exGenerator
>>>.java,v 1.8 2004/08/05 14:43:31 dflorey Exp $ public
>>>void 
>>>generate(HttpServletRequest req,
>>>HttpServletResponse res)
>>>
>>>It might be you can configure your own
>>>DirectoryIndexGenerator 
>>>to use in web.xml.
>>>
>>>Regards,
>>>
>>>Andreas
>>>
>>>
>>>On 21 Aug 2004 at 1:26, Jack Kada wrote:
>>>
>>>
>>>>Developers,
>>>>
>>>>I spent all of Friday reading mailing lists but
>>>>couldnt find the answer to the following:
>>>>
>>>>How can i customise the front page so i can
>>>
>>>include
>>>
>>>>image and background?  I read about and know how
>>>
>>>to
>>>
>>>>use JSP's but where is the JavaBean to get the
>>>
>>>listing
>>>
>>>>of folders available at any one time.
>>>>
>>>>Many Thanks
>>>>
>>>>
>>>
>>>
> 
> 
> 
> 
> 



RE: Help with SUM function

Posted by Jim Bury <jb...@SciQuest.com>.
That comes so close but I think I will still run into trouble. What I have is (FC=Fulfillment Center):

   A             B
1 Supplier #1   
2   FC1 Item #1  1.00
3   FC1 Item #2  2.00

4   FC1 Total    3.00

5   FC2 Item #1  4.00  	

6   FC2 Total    4.00
  
7 Supp #1 Total  7.00

8 Supplier #2   
9   FC1 Item #1  3.00
10  FC1 Item #2  2.00

11  FC1 Total    5.00

12  FC2 Item #1  6.00

13  FC2 Total    6.00
  
14 Supp #2 Total 11.00

15 Total (All)   18.00

What I end up with is a formula like this in my grand total: =SUM(G10:G11,G20:G22...). The sheet has 3 nested forEach's, and I fear I'll run into the same issue even with your approach. I think it is because there are text column headers for each section. I may need to figure out a way to tuck the amounts into columns someplace else and get a sum from there, but the loops are still creating gaps...

Thanks,
Jim

-----Original Message-----
From: Michael Zalewski [mailto:zalewski@optonline.net] 
Sent: Tuesday, August 31, 2010 3:13 PM
To: user@poi.apache.org
Subject: Re: Help with SUM function

Sounds like you have a column with subtotals and a grand total. The formula
that yields your grand total does not need to pick out ranges. Just run it
over the entire column, using SUBTOTAL instead of SUM.

For example
   A             B
 1 Supplier #1   1.00
 2               2.00
 3 Subtotal      =SUBTOTAL(9,B1:B2)
 4 Supplier #2   3.00
 5               4.00
 6 Subtotal      =SUBTOTAL(9,B4:B5)
 7 GRAND TOTAL   =SUBTOTAL(9,B1:B6)

You would think that the GRAND TOTAL would be double the correct result, because
its range includes the subtotals at B3 and B6. But such is not the case:
SUBTOTAL (function number 9 means sum) ignores cells that already contain
SUBTOTAL results, so only the detail values at B1, B2, B4 and B5 are counted.
A plain SUM over the same range would double-count them.

I'm not sure that the POI Formula Evaluator behaves this way. But Excel does.




---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@poi.apache.org
For additional commands, e-mail: user-help@poi.apache.org


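Jim's fallback idea (keeping the raw amounts in a helper column and summing only those) sidesteps the range bookkeeping entirely, since the grand total no longer depends on how the section ranges nest. The arithmetic, sketched outside POI with hypothetical row tags:

```shell
# Detail rows are tagged "item"; subtotal rows carry derived values and are skipped.
printf 'item 1.00\nitem 2.00\nsubtotal 3.00\nitem 4.00\nsubtotal 4.00\n' |
awk '$1 == "item" { s += $2 } END { printf "grand total: %.2f\n", s }'
```

In the sheet itself the same effect comes from summing the helper column with a single contiguous range, so the generated forEach gaps stop mattering.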


RE: Questions about client side API for versioning and Labelling.

Posted by Julian Reschke <ju...@gmx.de>.
Note that the Label: header is deprecated and will be removed in a future
revision of the protocol.

Check out the new DAV:labeled-version report [1] for an alternative
approach.

Julian


[1]
<http://www.webdav.org/deltav/protocol/draft-ietf-webdav-versioning-xx.6.htm
#_Toc13053962>

--
<green/>bytes GmbH -- http://www.greenbytes.de -- tel:+492512807760

> -----Original Message-----
> From: Ingo Brunberg [mailto:ib@fiz-chemie.de]
> Sent: Thursday, January 09, 2003 7:03 PM
> To: slide-user@jakarta.apache.org
> Subject: Re: Questions about client side API for versioning and
> Labelling.
>
>
> I haven't done this either, but according to the Delta-V specs you
> should be able to retrieve a specific version of a document by
> including the version number or label in a GET request. I don't know
> if Slide supports this, but the client API definitely provides no
> direct implementation of such an extended getMethod.
>
> Regards,
> Ingo
>
> >Excuse me, I can't tell you. I haven't done this.
> >
> >Andreas
> >
> >
> >On 9 Jan 2003 at 7:51, Son Singh wrote:
> >
> >>
> >> Hi Andreas,
> >> Thanks for your help.
> >> Now I can get all the versions of a given document as well as
> >> entire history of a given document. Still have a question
> >> regarding getting document based on label. If label name is
> >> "PRODUCTION", how do I get the document corresponding to
> >> "PRODUCTION". Thanks, -Son.
> >>  Andreas Probst <an...@gmx.net> wrote:I forgot one thing:
> >>  There is no ONE VCR that belongs to a VR.
> >> There could be several VCR's which have the same VHR and VR's.
> >>
> >> Happy New Year!
> >>
> >> Andreas
> >>
> >>
> >> On 31 Dec 2002 at 10:41, Andreas Probst wrote:
> >>
> >> > Hi Son,
> >> >
> >> > I'd recommend the DeltaV spec
> >> > (http://www.ietf.org/rfc/rfc3253.txt).
> >> >
> >> > Please, see intermixed.
> >> >
> >> >
> >> > On 30 Dec 2002 at 13:18, Son Singh wrote:
> >> >
> >> > >
> >> > > Hi,
> >> > >
> >> > > I have been using Slide 1.0.16 for some time. I have started
> >> > > using Slide 2.x recently. I was successfully able to
> >> > > configure slide for auto versioning. Now the question comes
> >> > > here.
> >> > >
> >> > > 1. How do I get all the versions for a given document ? Is
> >> > > there a client side API which can accomplish this ?
> >> >
> >> > For each version-controlled resource (VCR) there is a directory
> >> > (version history resource - VHR) below /history. The first VCR
> >> > gets the 1, the second the 2 and so on. The version resource
> >> > (VR) gets number 1.0, 1.1 and so on. So the first VR of the
> >> > first VCR is identified by the URI /history/1/1.0.
> >> >
> >> > One of the following two properties DAV:checked-in or
> >> > DAV:checked-out is set on each VCR. It gives you the path to
> >> > the current VR of the VCR. (pages 21-23 of DeltaV spec)
> >> >
> >> > >
> >> > > 2. How do I get different labels associated to a document ?
> >> > >
> >> > > 3. How to retrieve the document of a given version number /
> >> > > label.
> >> > Use the DAV:locate-by-history Report (DeltaV spec p43)
> >> >
> >> > >
> >> > > Any help / pointers will be highly appreciated.
> >> > >
> >> > > Thanks in advance,
> >> > >
> >> > > Son.
> >> > >
> >> > >
> >> >
> >> > Andreas
>
>
> --
> To unsubscribe, e-mail:
> <ma...@jakarta.apache.org>
> For additional commands, e-mail:
> <ma...@jakarta.apache.org>
>


--
To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
For additional commands, e-mail: <ma...@jakarta.apache.org>


Re: Questions about client side API for versioning and Labelling.

Posted by Ingo Brunberg <ib...@fiz-chemie.de>.
I haven't done this either, but according to the Delta-V specs you
should be able to retrieve a specific version of a document by
including the version number or label in a GET request. I don't know
if Slide supports this, but the client API definitely provides no
direct implementation of such an extended getMethod.

Regards,
Ingo

>Excuse me, I can't tell you. I haven't done this.
>
>Andreas
>
>
>On 9 Jan 2003 at 7:51, Son Singh wrote:
>
>> 
>> Hi Andreas,
>> Thanks for your help.
>> Now I can get all the versions of a given document as well as
>> entire history of a given document. Still have a question
>> regarding getting document based on label. If label name is
>> "PRODUCTION", how do I get the document corresponding to
>> "PRODUCTION". Thanks, -Son.
>>  Andreas Probst <an...@gmx.net> wrote:I forgot one thing:
>>  There is no ONE VCR that belongs to a VR. 
>> There could be several VCR's which have the same VHR and VR's.
>> 
>> Happy New Year!
>> 
>> Andreas
>> 
>> 
>> On 31 Dec 2002 at 10:41, Andreas Probst wrote:
>> 
>> > Hi Son,
>> > 
>> > I'd recommend the DeltaV spec
>> > (http://www.ietf.org/rfc/rfc3253.txt).
>> > 
>> > Please, see intermixed.
>> > 
>> > 
>> > On 30 Dec 2002 at 13:18, Son Singh wrote:
>> > 
>> > > 
>> > > Hi,
>> > > 
>> > > I have been using Slide 1.0.16 for some time. I have started
>> > > using Slide 2.x recently. I was successfully able to
>> > > configure slide for auto versioning. Now the question comes
>> > > here. 
>> > > 
>> > > 1. How do I get all the versions for a given document ? Is
>> > > there a client side API which can accomplish this ?
>> > 
>> > For each version-controlled resource (VCR) there is a directory
>> > (version history resource - VHR) below /history. The first VCR
>> > gets the 1, the second the 2 and so on. The version resource
>> > (VR) gets number 1.0, 1.1 and so on. So the first VR of the
>> > first VCR is identified by the URI /history/1/1.0.
>> > 
>> > One of the following two properties DAV:checked-in or 
>> > DAV:checked-out is set on each VCR. It gives you the path to
>> > the current VR of the VCR. (pages 21-23 of DeltaV spec)
>> > 
>> > > 
>> > > 2. How do I get different labels associated to a document ?
>> > > 
>> > > 3. How to retrieve the document of a given version number /
>> > > label.
>> > Use the DAV:locate-by-history Report (DeltaV spec p43)
>> > 
>> > > 
>> > > Any help / pointers will be highly appreciated.
>> > > 
>> > > Thanks in advance,
>> > > 
>> > > Son.
>> > > 
>> > > 
>> > 
>> > Andreas




Re: ssl-authorities-file

Posted by js...@pobox.com.
Thanks for replying.


>You're confusing the meaning of 'ssl-authorities-file'.  It means, "which
>CA's do I trust?"   It's supposed to point to the certificate of the *CA* that
>signed the server cert, not to the server cert itself.

I'll not dispute this. However, my certificate is signed by GeoTrust. I 
went to their website (www.geotrust.com) and downloaded their certificate. 
I changed my server's file to point to it and still no joy.


Jason Stewart

Re: Timestamp Frustrations

Posted by tr...@clayst.com.
On 3 Jun 2005 Ben Collins-Sussman wrote:

> Well, using version control means changing your practices.  There's 
> no such thing as transparent version control.  :-) 

Fair enough :-).

> If the repository is FSFS-backed, rather than BDB, then it's fine to  
> access directly via file:/// over a network share.  Otherwise:  why  
> not just run a trivial 'svnserve' daemon and use a real network?   
> It's dead simple to set up, and will probably work faster than SMB.

I could do this if I wanted to.  My network consists of four Windows 
machines and two Linux machines (fileserver and firewall), both 
running Slackware.  I normally access the fileserver from Windows via 
SMB, and I could put the repository on there easily and use svnserve.  
However, since I currently store it on one Windows machine and don't 
mind opening it for sharing inside the network, leaving it there and 
using file:/// from the other also works, and as they say, it ain't 
broke.

> Doubles the complexity?  Just use your current script for
> unversioned  things.  Use 'svn commit' and 'svn up' for the
> versioned stuff;  the  svn commands don't even need to be part of a
> script. 

Right now I can update with a single command (with which I have to 
interact).  Using svn for part of the update requires using a different 
approach for those directories, and that is more complex.

> I'm not sure I understand;  is there some difference between "commit  
> changes" and "copy files from one machine to another?"  Either you're  
> ready to broadcast work to other computers, or you're not.

The two computers are both used by me.  One is a desktop and one is a 
laptop.  I move back and forth between them often, depending on where, 
how, and when I'm working, and whether I need to take work out of the 
house.  I tend to use commit when I'm done with something, which is not 
at all synchronized with switching machines.

> >     - Using svn handles only the files under version control.  How do
> >     I also handle unversioned files in the directories that are under
> >     version control.
> 
> Keep using your timestamp scripts, I guess.

Well yes, but then that script needs to know not to muck with the files 
svn is updating.  I'd have to put them in a separate directory, another 
complexity.

> The 'live site' issue is so common, it's even an SVN FAQ:
> 
>     http://subversion.tigris.org/faq.html#website-auto-update
> 
> Basically, you have a post-commit hook run 'svn update' on a live  
> working-copy after every commit.  You get automatic publishing of  
> whatever has changed.

Thanks, I will look at that.  I would need multiple updates as I have 
both live and development sites on remote servers, and live and 
development branches in the repository.

> Um, your current process sounds anything but simple to me.  :-)

Ah, but I can do it without having to hardly think about it at all :-).

> The problem is that we're both speaking in generalities.  Maybe if  
> you posted a detailed description of your workflow, others on this  
> list could prescribe a new process for you.

Well sure, here is a typical transfer:

	- Do some work on project files
	- Look at clock
	- Uh oh, have to be at that meeting across town in 15 minutes,
		then I'm going to pick up the kids, maybe I'll sit at the
		library for the hour in between
	- Better take the laptop
	- Save files, in whatever state they're in, marking location of
		current effort (usually I just stick a marker in the source)
	- Close editor on desktop
	- Open laptop
	- Run update script to pick up changed email, project files,
		correspondence, images, etc. from desktop -- only about
		10% of what's copied is under version control

When I get back this is typical:

	- Start work on laptop in dining room while kids play
	- Oops, time to clean up for dinner
	- Realize that I'll be working on desktop later
	- Close editor, etc., as above
	- Run update script to put what I did for the afternoon back on
		desktop

You get the idea, I hope.  Most of what's transferred is not under 
version control, and when it is really all I'm doing in svn terms is 
keeping the working copy in the same state on both machines, but 
committing little changes and half-done files does not seem to me to be 
what VCS is or should be for.

The original problem here was being able to preserve timestamps, for 
which I have a number of fairly conventional and legitimate uses.  
Significantly altering my work practices to get the benefits of VCS 
makes sense.  But having to use VCS to keep my laptop synchronized with 
my desktop 2 or 3 times a day is not the benefit I want from VCS, it's 
added complexity for no actual benefit that I can see.  If I'm changing 
my practice purely to work around the way a tool is designed, and 
adding complexity for no benefit just so I can do that, that seems to 
me like the tail wagging the dog.

On the other hand, one might conclude from the above that svn is not 
the right tool for the job I'm trying to do, but everything else I 
looked at had other problems that were even worse ...

--
Tom




---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org
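For the unversioned side of Tom's sync script, copies can preserve mtimes so that "which copy is newer" stays meaningful; a minimal self-contained sketch using cp -p (the file and directory names are made up):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
mkdir desktop laptop
printf 'notes\n' > desktop/todo.txt
touch -t 202001011200 desktop/todo.txt   # pin a known mtime on the source
cp -p desktop/todo.txt laptop/todo.txt   # -p preserves mode and timestamps
# the sync decision "is the other copy newer?" remains meaningful:
[ "$(date -r desktop/todo.txt +%s)" = "$(date -r laptop/todo.txt +%s)" ] \
  && echo "timestamps preserved"
```

On the versioned side, Subversion's client config (~/.subversion/config) has a documented `use-commit-times = yes` option that stamps working files with their commit times on checkout/update, which helps with the deploy-by-timestamp case Tom mentions.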

Re: Timestamp Frustrations

Posted by Ben Collins-Sussman <su...@collab.net>.
On Jun 3, 2005, at 8:52 AM, trlists@clayst.com wrote:

> On 3 Jun 2005 Ben Collins-Sussman wrote:
>
>
>> Why not let the version control system do this for you?  Instead of
>> copying files around yourself based on timestamps, why not just have
>> a working copy on each machine?  Then all you need to do is run 'svn
>> update' to get the newest things on each box.
>>
>
> Thanks Ben.  I knew someone would say that :-).
>
> That works fine for that purpose.  It is a significant change in
> practice but I could get used to it.

Well, using version control means changing your practices.  There's  
no such thing as transparent version control.  :-)

>
>     - Right now the repository is stored locally on one machine.  Can
>     I access it from the other across the (Windows) network using
>     file:/// syntax if the repository drive is mapped?

If the repository is FSFS-backed, rather than BDB, then it's fine to  
access directly via file:/// over a network share.  Otherwise:  why  
not just run a trivial 'svnserve' daemon and use a real network?   
It's dead simple to set up, and will probably work faster than SMB.

>
>     - I manage the transfer of all kinds of other files between
>     machines with a single script that uses timestamps.  This approach
>     would require using svn for all the files under version  
> control, so
>     it doubles the complexity of updating.

Doubles the complexity?  Just use your current script for unversioned  
things.  Use 'svn commit' and 'svn up' for the versioned stuff;  the  
svn commands don't even need to be part of a script.


>
>     - Using svn update requires a commit on one machine before
>     updating on the other.  I switch back and forth between machines
>     sometimes 2 or 3 times a day, and often I'm not ready to commit
>     the work when I just happen to need to switch machines.  IOW the
>     version control cycle and the between-machines update cycle are
>     poorly matched. .

I'm not sure I understand;  is there some difference between "commit  
changes" and "copy files from one machine to another?"  Either you're  
ready to broadcast work to other computers, or you're not.

In the old system, you would copy stuff over via timestamps.  In the  
new system, you would 'svn commit', and then 'svn update'.


>
>     - Using svn handles only the files under version control.  How do
>     I also handle unversioned files in the directories that are under
>     version control.

Keep using your timestamp scripts, I guess.


>
>     - There are other things I do with timestamps -- for example
>     understanding if two files were changed at about the same time or
>     not, looking at which files I need to upload to deploy the changes
>     to my live site, etc.  The way svn manages the timestamps makes
>     this difficult.

The 'live site' issue is so common, it's even an SVN FAQ:

    http://subversion.tigris.org/faq.html#website-auto-update

Basically, you have a post-commit hook run 'svn update' on a live  
working-copy after every commit.  You get automatic publishing of  
whatever has changed.

>
> So it is a lot more than just saying "use svn update".  It could be
> done, but it adds a lot of complexity to what is currently a  
> relatively
> simple process.
>

Um, your current process sounds anything but simple to me.  :-)

The problem is that we're both speaking in generalities.  Maybe if  
you posted a detailed description of your workflow, others on this  
list could prescribe a new process for you.



Re: [sandbox/i18n] Somebody please re-add

Posted by James Mitchell <jm...@apache.org>.
done.



--
James Mitchell
Software Engineer / Open Source Evangelist
Consulting / Mentoring / Freelance
EdgeTech, Inc.
http://www.edgetechservices.net/
678.910.8017
AIM:   jmitchtx
Yahoo: jmitchtx
MSN:   jmitchell@apache.org




----- Original Message ----- 
From: "Mattias J" <ma...@expertsystem.se>
To: "Jakarta Commons Developers List" <co...@jakarta.apache.org>
Sent: Wednesday, May 18, 2005 2:54 AM
Subject: [sandbox/i18n] Somebody please re-add


Daniel Florey seems to be unavailable at the time, so is there somebody
else with sandbox access that would be so kind to re-add the i18n test
cases that were added/modified in rev 167900 and accidentally removed in
rev 168590?

The diff is inlined below, but I will probably have to e-mail you a diff
file off list, to make sure wrapping and line breaks are intact.



Index: src/test/org/apache/commons/i18n/MessageManagerTest.java
===================================================================
--- src/test/org/apache/commons/i18n/MessageManagerTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/MessageManagerTest.java (revision 0)
@@ -0,0 +1,115 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n;
+
+import java.util.Locale;
+import java.util.Map;
+
+/**
+ * @author Mattias Jiderhamn
+ */
+public class MessageManagerTest extends MockProviderTestBase {
+
+    /** Dummy test to add the constructor to the coverage report */
+    public void testDummy() {
+        new MessageManager();
+    }
+
+    public void testGetText() {
+        assertEquals("Default text used", "defaultText",
+                MessageManager.getText("dummyId", "dummyEntry", null, Locale.US, "defaultText"));
+        assertEquals("Default text with arguments", "defaultText with value",
+                MessageManager.getText("dummyId", "dummyEntry", new String[] {"with value"},
+                        Locale.US, "defaultText {0}"));
+        try {
+            MessageManager.getText("dummyId", "dummyEntry", null, Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("Error text", "No MessageProvider registered", mnfex.getMessage());
+        }
+
+        addThrowingMockProvider(); // Add mock provider always throwing exceptions
+
+        try {
+            MessageManager.getText("dummyId", "dummyEntry", null, Locale.US);
+            fail("Mock provider should throw Exception");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("Error text", "Mock exception from getText()", mnfex.getMessage());
+        }
+
+        addMockProvider(); // Add mock provider
+
+        assertEquals("Throwing mock not used", "Id=dummyId Entry=dummyEntry Locale=en_US",
+                MessageManager.getText("dummyId", "dummyEntry", null, Locale.US, "defaultText"));
+
+        removeThrowingMockProvider(); // Remove throwing mock and keep only normal mock
+
+        assertEquals("Default text not used", "Id=dummyId Entry=dummyEntry Locale=en_US",
+                MessageManager.getText("dummyId", "dummyEntry", null, Locale.US, "defaultText"));
+
+        assertEquals("Normal lookup", "Id=id Entry=entry Locale=en_US",
+                MessageManager.getText("id", "entry", null, Locale.US));
+        assertEquals("Single argument",
+                "Id=id Entry=entry value1 Locale=en_US",
+                MessageManager.getText("id", "entry {0}", new String[] {"value1"}, Locale.US));
+        assertEquals("Multiple arguments",
+                "Id=id Entry=entry value0: value1 Locale=en_US",
+                MessageManager.getText("id", "entry {0}: {1}", new String[] {"value0", "value1"}, Locale.US));
+
+        assertEquals("Single argument and default",
+                "Id=id Entry=entry value1 Locale=en_US",
+                MessageManager.getText("id", "entry {0}", new String[] {"value1"}, Locale.US, "defaultText"));
+    }
+
+    public void testGetEntries() {
+        try {
+            MessageManager.getEntries("dummyId", Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("Error text", "No MessageProvider registered", mnfex.getMessage());
+        }
+
+        addThrowingMockProvider(); // Add mock provider always throwing exceptions
+
+        try {
+            MessageManager.getEntries("dummyId", Locale.US);
+            fail("Mock provider should throw Exception");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("Error text", "Mock exception from getEntries()", mnfex.getMessage());
+        }
+
+        addMockProvider(); // Add mock provider
+
+        Map entries = MessageManager.getEntries("dummyId", Locale.US);
+        assertEquals("No of entries", 2, entries.size());
+        assertEquals("Entry 1 match", "Id=dummyId Entry=entry1 Locale=en_US", entries.get("entry1"));
+        assertEquals("Entry 2 match", "Id=dummyId Entry=entry2 Locale=en_US", entries.get("entry2"));
+
+        removeThrowingMockProvider(); // Remove throwing mock and keep only normal mock
+
+        addMockProvider(); // Add mock provider
+
+        entries = MessageManager.getEntries("dummyId", Locale.US);
+        assertEquals("No of entries", 2, entries.size());
+        assertEquals("Entry 1 match", "Id=dummyId Entry=entry1 Locale=en_US", entries.get("entry1"));
+        assertEquals("Entry 2 match", "Id=dummyId Entry=entry2 Locale=en_US", entries.get("entry2"));
+    }
+}
\ No newline at end of file
Index: src/test/org/apache/commons/i18n/MockProviderTestBase.java
===================================================================
--- src/test/org/apache/commons/i18n/MockProviderTestBase.java (revision 0)
+++ src/test/org/apache/commons/i18n/MockProviderTestBase.java (revision 0)
@@ -0,0 +1,92 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n;
+
+import junit.framework.TestCase;
+
+import java.util.Locale;
+import java.util.Map;
+import java.util.HashMap;
+import java.text.MessageFormat;
+
+/**
+ * The <code>MockProviderTestBase</code> class serves as a base class for test cases using a mock
+ * <code>MessageProvider</code>. After every test, it will remove the mock message provider to prepare
+ * for other tests.
+ * @author Mattias Jiderhamn
+ */
+public abstract class MockProviderTestBase extends TestCase {
+    /**
+     * Mock message provider that returns a string made up of the arguments passed to it.
+     */
+    final private MessageProvider mockMessageProvider = new MessageProvider() {
+        public String getText(String id, String entry, Locale locale) throws MessageNotFoundException {
+            return MockProviderTestBase.getMockString(id, entry, locale);
+        }
+
+        public Map getEntries(String id, Locale locale) throws MessageNotFoundException {
+            Map output = new HashMap();
+            output.put("entry1", MockProviderTestBase.getMockString(id, "entry1", locale));
+            output.put("entry2", MockProviderTestBase.getMockString(id, "entry2", locale));
+            return output;
+        }
+    };
+
+    public void tearDown() {
+        /* Remove mock providers after each test, to allow for MessageNotFoundExceptions */
+        MessageManager.removeMessageProvider("mock");
+        removeThrowingMockProvider();
+    }
+
+    /**
+     * Add mock provider to <code>MessageManager</code>.
+     */
+    protected void addMockProvider() {
+        MessageManager.addMessageProvider("mock", mockMessageProvider);
+    }
+
+    /**
+     * Add a provider that always throws errors to <code>MessageManager</code>.
+     */
+    protected void addThrowingMockProvider() {
+        MessageManager.addMessageProvider("throwingMock", new MessageProvider() {
+            public String getText(String id, String entry, Locale locale) throws MessageNotFoundException {
+                throw new MessageNotFoundException("Mock exception from getText()");
+            }
+
+            public Map getEntries(String id, Locale locale) throws MessageNotFoundException {
+                throw new MessageNotFoundException("Mock exception from getEntries()");
+            }
+        });
+    }
+
+    protected void removeThrowingMockProvider() {
+        MessageManager.removeMessageProvider("throwingMock");
+    }
+
+    ////////////////////////////////////////////////////////////////////////
+    // Utility methods
+    ////////////////////////////////////////////////////////////////////////
+
+    public static String getMockString(String id, String entry, Locale locale) throws MessageNotFoundException {
+        return "Id=" + id + " Entry=" + entry + " Locale=" + locale;
+    }
+
+    public static String getFormattedMockString(String id, String entry, String[] arguments, Locale locale) {
+        return MessageFormat.format(getMockString(id, entry, locale), arguments);
+    }
+}
Index: src/test/org/apache/commons/i18n/LocalizedBundleTest.java
===================================================================
--- src/test/org/apache/commons/i18n/LocalizedBundleTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/LocalizedBundleTest.java (revision 0)
@@ -0,0 +1,77 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n;
+
+import java.util.Locale;
+
+/**
+ * @author Mattias Jiderhamn
+ */
+public class LocalizedBundleTest extends MockProviderTestBase {
+    public void testConstructors() {
+        LocalizedBundle lb = new LocalizedBundle("dummyId1");
+        assertEquals("Id set", "dummyId1", lb.getId());
+        assertNotNull("Arguments not null", lb.getArguments());
+        assertEquals("No arguments", 0, lb.getArguments().length);
+
+        String[] arguments = new String[]{"arg1", "arg2"};
+        LocalizedBundle lbArgs = new LocalizedBundle("dummyId2", arguments);
+        assertEquals("Id set", "dummyId2", lbArgs.getId());
+        assertNotNull("Arguments not null", lbArgs.getArguments());
+        assertEquals("No of arguments", 2, lbArgs.getArguments().length);
+        assertEquals("Arguments", arguments, lbArgs.getArguments());
+    }
+
+    public void testGetEntry() {
+        LocalizedBundle lb = new LocalizedBundle("dummyId1");
+        LocalizedBundle lbArgs = new LocalizedBundle("dummyId2", new String[] {"arg1", "arg2"});
+
+        // Test errors
+        try {
+            lb.getEntry("dummyEntry", Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("Error text", "No MessageProvider registered", mnfex.getMessage());
+        }
+        try {
+            lbArgs.getEntry("dummyEntry", Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("Error text", "No MessageProvider registered", mnfex.getMessage());
+        }
+
+        // Test default texts
+        assertEquals("Default text", "defaultText", lb.getEntry("dummyEntry", Locale.US, "defaultText"));
+        assertEquals("Default text with arguments", "defaultText with arg1 arg2",
+                lbArgs.getEntry("dummyEntry", Locale.US, "defaultText with {0} {1}"));
+
+        addMockProvider(); // Add mock provider
+
+        assertEquals("Default text not used", "Id=dummyId1 Entry=dummyEntry Locale=en_US",
+                lb.getEntry("dummyEntry", Locale.US, "defaultText"));
+
+        assertEquals("Normal lookup", "Id=dummyId1 Entry=entry Locale=en_US", lb.getEntry("entry", Locale.US));
+        assertEquals("Arguments missing", "Id=dummyId1 Entry=entry {0} Locale=en_US",
+                lb.getEntry("entry {0}", Locale.US));
+        assertEquals("Argument", "Id=dummyId2 Entry=entry arg1 arg2 Locale=en_US",
+                lbArgs.getEntry("entry {0} {1}", Locale.US));
+        assertEquals("Arguments and default", "Id=dummyId2 Entry=entry arg1 arg2 Locale=en_US",
+                lbArgs.getEntry("entry {0} {1}", Locale.US, "defaultText"));
+    }
+}
Index: src/test/org/apache/commons/i18n/LocalizedErrorTest.java
===================================================================
--- src/test/org/apache/commons/i18n/LocalizedErrorTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/LocalizedErrorTest.java (revision 0)
@@ -0,0 +1,61 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n;
+
+import org.apache.commons.i18n.bundles.ErrorBundle;
+
+import java.util.Locale;
+
+/**
+ * @author Mattias Jiderhamn
+ */
+public class LocalizedErrorTest extends MockProviderTestBase {
+    public void testLocalizedErrorWithCause() {
+        Throwable cause = new Exception("foo");
+        ErrorBundle errorBundle = new ErrorBundle("errorMessageId");
+        LocalizedError le = new LocalizedError(errorBundle, cause);
+        assertEquals("Cause", cause, le.getCause());
+        assertEquals("Error bundle", errorBundle, le.getErrorMessage());
+        assertEquals("Error message", cause.getMessage(), le.getMessage());
+
+        addMockProvider(); // Add mock provider
+
+        LocalizedError le2 = new LocalizedError(errorBundle, cause);
+        assertEquals("Cause", cause, le2.getCause());
+        assertEquals("Error bundle", errorBundle, le2.getErrorMessage());
+        assertEquals("Error message", getMockString("errorMessageId", ErrorBundle.SUMMARY, Locale.getDefault()),
+                le2.getMessage());
+    }
+
+    public void testLocalizedErrorWithoutCause() {
+        ErrorBundle errorBundle = new ErrorBundle("errorMessageId");
+        LocalizedError le = new LocalizedError(errorBundle);
+        assertNull("Cause", le.getCause());
+        assertEquals("Error bundle", errorBundle, le.getErrorMessage());
+        assertEquals("Error message",
+                "Message bundle with key errorMessageId does not contain an entry with key summary", le.getMessage());
+
+        addMockProvider(); // Add mock provider
+
+        LocalizedError le2 = new LocalizedError(errorBundle);
+        assertNull("Cause", le2.getCause());
+        assertEquals("Error bundle", errorBundle, le2.getErrorMessage());
+        assertEquals("Error message", getMockString("errorMessageId", ErrorBundle.SUMMARY, Locale.getDefault()),
+                le2.getMessage());
+    }
+}
Index: src/test/org/apache/commons/i18n/MessageNotFoundExceptionTest.java
===================================================================
--- src/test/org/apache/commons/i18n/MessageNotFoundExceptionTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/MessageNotFoundExceptionTest.java (revision 0)
@@ -0,0 +1,31 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n;
+
+import junit.framework.TestCase;
+
+/**
+ * Meaningless test class with the only purpose of making the
+ * coverage report look better...
+ * @author Mattias Jiderhamn
+ */
+public class MessageNotFoundExceptionTest extends TestCase {
+    public void testConstruction() {
+        new MessageNotFoundException("");
+        new MessageNotFoundException("foo", new Exception("bar"));
+    }
+}
Index: src/test/org/apache/commons/i18n/XMLMessageProviderTest.java
===================================================================
--- src/test/org/apache/commons/i18n/XMLMessageProviderTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/XMLMessageProviderTest.java (revision 0)
@@ -0,0 +1,187 @@
+/*
+*
+* ====================================================================
+*
+* Copyright 2004 The Apache Software Foundation
+*
+* Licensed under the Apache License, Version 2.0 (the "License");
+* you may not use this file except in compliance with the License.
+* You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*
+*/
+package org.apache.commons.i18n;
+
+import java.util.Locale;
+import java.util.Map;
+
+import org.apache.commons.i18n.bundles.MessageBundle;
+
+import junit.framework.TestCase;
+
+/**
+ * @author Daniel Florey
+ *
+ */
+public class XMLMessageProviderTest extends TestCase {
+
+    public void setUp() {
+        /* Make sure en_US is the default Locale for tests */
+        Locale.setDefault(Locale.US);
+    }
+
+    public void tearDown() {
+        /* Uninstall resource bundles after every test */
+        XMLMessageProvider.uninstall("org.apache.commons-i18n.test");
+        XMLMessageProvider.uninstall("org.apache.commons-i18n.error");
+        XMLMessageProvider.uninstall("org.apache.commons-i18n.variants");
+    }
+
+    public void testInstallResourceBundle() {
+        MessageBundle testMessage = new MessageBundle("helloWorld");
+
+        try {
+            testMessage.getTitle(Locale.GERMAN);
+            fail("XML file not installed, should throw exception");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+
+        XMLMessageProvider.install("org.apache.commons-i18n.test",
+                Thread.currentThread().getContextClassLoader().getResourceAsStream("testMessages.xml"));
+
+        assertEquals("Hallo Welt", testMessage.getTitle(Locale.GERMAN));
+
+        XMLMessageProvider.update("org.apache.commons-i18n.test",
+                Thread.currentThread().getContextClassLoader().getResourceAsStream("testMessages.xml"));
+
+        assertEquals("OK after update", "Hallo Welt", testMessage.getTitle(Locale.GERMAN));
+
+        XMLMessageProvider.uninstall("org.apache.commons-i18n.test");
+
+        try {
+            testMessage.getTitle(Locale.GERMAN);
+            fail("XML file uninstalled, should throw exception");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+
+        // Try to parse non-XML file
+        XMLMessageProvider.install("org.apache.commons-i18n.error",
+                Thread.currentThread().getContextClassLoader().getResourceAsStream("messageBundle.properties"));
+    }
+
+    public void testGetText() {
+//        XMLMessageProvider.install("org.apache.commons-i18n.test",
+//                Thread.currentThread().getContextClassLoader().getResourceAsStream("testMessages.xml"));
+        XMLMessageProvider xmlmp = new XMLMessageProvider("org.apache.commons-i18n.test",
+                Thread.currentThread().getContextClassLoader().getResourceAsStream("testMessages.xml"));
+
+        assertEquals("Default locale", "hello world", xmlmp.getText("helloWorld", "title", Locale.US));
+        assertEquals("Default locale", "hello world", xmlmp.getText("helloWorld", "title", Locale.UK));
+        assertEquals("Additional locale", "Hallo Welt", xmlmp.getText("helloWorld", "title", Locale.GERMAN));
+        assertEquals("Language and country using parent", "Hallo Welt", xmlmp.getText("helloWorld", "title",
+                new Locale("de", "CH")));
+        assertEquals("Language, country and variant using parent", "Hallo Welt", xmlmp.getText("helloWorld", "title",
+                new Locale("de", "CH", "foo")));
+        assertEquals("Fallback locale", "hello world", xmlmp.getText("helloWorld", "title", Locale.JAPANESE));
+        // TODO: Wait for Daniel's reply on whether this is intended
+        // assertEquals("Fallback when only in default", "This entry is not translated to any other languages",
+        //         xmlmp.getText("helloWorld", "notTranslated", Locale.GERMAN));
+
+//        ResourceBundleMessageProvider.install("messageBundle2"); // Install another bundle
+//        assertEquals("This message exists in another resource bundle", xmlmp.getText("onlyInSecond", "title", Locale.US));
+
+        try {
+            xmlmp.getText("nonExistentId", "nonExistentEntry", Locale.US);
+            fail("ID does not exist, should throw exception");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("Message with key nonExistentId not found", mnfex.getMessage());
+        }
+
+        // TODO: Wait for Daniel's reply on whether this is intended
+        /*
+        try {
+            String s = xmlmp.getText("helloWorld", "nonExistentEntry", Locale.US);
+            fail("Entry does not exist, should throw exception. Entry was: '" + s + "'");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No message entries found for bundle with key helloWorld", mnfex.getMessage());
+        }
+        */
+    }
+
+    public void testGetTextVariants() {
+//        XMLMessageProvider.install("org.apache.commons-i18n.variants",
+//                Thread.currentThread().getContextClassLoader().getResourceAsStream("variantTestMessages.xml"));
+        XMLMessageProvider xmlmp = new XMLMessageProvider("org.apache.commons-i18n.variants",
+                Thread.currentThread().getContextClassLoader().getResourceAsStream("variantTestMessages.xml"));
+
+        assertEquals("hello world", xmlmp.getText("variants", "theKey", Locale.ENGLISH));
+        assertEquals("Botswana", "Hello Botswana", xmlmp.getText("variants", "theKey", new Locale("", "BW")));
+        assertEquals("Awrite warld", xmlmp.getText("variants", "theKey", new Locale("en", "GB", "scottish")));
+        assertEquals("Ga, ga, ga", xmlmp.getText("variants", "theKey", new Locale("en", "", "baby")));
+    }
+
+    public void testGetEntries() {
+//        XMLMessageProvider.install("org.apache.commons-i18n.test",
+//                Thread.currentThread().getContextClassLoader().getResourceAsStream("testMessages.xml"));
+        Map usEntries = new XMLMessageProvider("org.apache.commons-i18n.test",
+                Thread.currentThread().getContextClassLoader().getResourceAsStream("testMessages.xml")).
+                    getEntries("helloWorld", Locale.US);
+        assertEquals("Default locale, no of entries", 5, usEntries.size());
+        assertEquals("Default locale, title", "hello world", usEntries.get("title"));
+        assertEquals("Default locale, text", "hello world, we are in {0}.", usEntries.get("text"));
+        assertEquals("Default locale, summary",
+                "sample summary to test english messages. Country = {0}, language = {1} and variant = {2}.",
+                usEntries.get("summary"));
+        assertEquals("Default locale, details",
+                "sample deatils to test english messages. Country = {0}, language = {1} and variant = {2}.",
+                usEntries.get("details"));
+        assertEquals("This entry is not translated to any other languages (XML)", usEntries.get("notTranslated"));
+
+        Map germanEntries = new XMLMessageProvider("org.apache.commons-i18n.test",
+                Thread.currentThread().getContextClassLoader().getResourceAsStream("testMessages.xml")).
+                    getEntries("helloWorld", Locale.GERMAN);
+        assertEquals("No of entries", 4, germanEntries.size());
+        assertEquals("Hallo Welt", germanEntries.get("title"));
+        assertEquals("Wir sind in {0}.", germanEntries.get("text"));
+        assertEquals("sample summary to test german messages. Country = {0}, language = {1} and variant = {2}.",
+                germanEntries.get("summary"));
+        assertEquals("sample deatils to test german messages. Country = {0}, language = {1} and variant = {2}.",
+                germanEntries.get("details"));
+//        assertEquals("This entry is not translated to any other languages", germanEntries.get("notTranslated"));
+
+        Map japaneseEntries = new XMLMessageProvider("org.apache.commons-i18n.test",
+                Thread.currentThread().getContextClassLoader().getResourceAsStream("testMessages.xml")).
+                    getEntries("helloWorld", Locale.JAPANESE);
+        assertEquals("Fallback locale, no of entries", 5, japaneseEntries.size());
+
+        assertEquals("Fallback locale, title", "hello world", japaneseEntries.get("title"));
+        assertEquals("Fallback locale, text", "hello world, we are in {0}.", japaneseEntries.get("text"));
+        assertEquals("Fallback locale, summary",
+                "sample summary to test english messages. Country = {0}, language = {1} and variant = {2}.",
+                japaneseEntries.get("summary"));
+        assertEquals("Fallback locale, details",
+                "sample deatils to test english messages. Country = {0}, language = {1} and variant = {2}.",
+                japaneseEntries.get("details"));
+        assertEquals("This entry is not translated to any other languages (XML)", japaneseEntries.get("notTranslated"));
+    }
+
+    /**
+     * Constructor for XMLMessageProviderTest.
+     */
+    public XMLMessageProviderTest(String testName) {
+        super(testName);
+    }
+}
Index: src/test/org/apache/commons/i18n/ResourceBundleMessageProviderTest.java
===================================================================
--- src/test/org/apache/commons/i18n/ResourceBundleMessageProviderTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/ResourceBundleMessageProviderTest.java (revision 0)
@@ -0,0 +1,168 @@
+/*
+*
+* ====================================================================
+*
+* Copyright 2004 The Apache Software Foundation
+*
+* Licensed under the Apache License, Version 2.0 (the "License");
+* you may not use this file except in compliance with the License.
+* You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*
+*/
+package org.apache.commons.i18n;
+
+import java.util.Locale;
+import java.util.Map;
+
+import junit.framework.TestCase;
+
+import org.apache.commons.i18n.bundles.MessageBundle;
+
+/**
+ * @author Daniel Florey
+ *
+ */
+public class ResourceBundleMessageProviderTest extends TestCase {
+    public ResourceBundleMessageProviderTest(String testName) {
+        super(testName);
+    }
+
+    public void setUp() {
+        /* Make sure en_US is the default Locale for tests */
+        Locale.setDefault(Locale.US);
+    }
+
+    public void tearDown() {
+        /* Uninstall resource bundles after every test */
+        ResourceBundleMessageProvider.uninstall("messageBundle");
+        ResourceBundleMessageProvider.uninstall("messageBundle2");
+        ResourceBundleMessageProvider.uninstall("nonExistentBundle");
+        ResourceBundleMessageProvider.uninstall("org.apache.commons.i18n.MyListResourceBundle");
+    }
+
+    public void testInstallResourceBundle() {
+        MessageBundle testMessage = new MessageBundle("helloWorld");
+
+        try {
+            testMessage.getTitle(Locale.GERMAN);
+            fail("ResourceBundle not installed, should throw exception");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+
+        ResourceBundleMessageProvider.install("messageBundle");
+
+        assertEquals("Hallo Welt", testMessage.getTitle(Locale.GERMAN));
+
+        ResourceBundleMessageProvider.update("messageBundle");
+
+        assertEquals("OK after update", "Hallo Welt", testMessage.getTitle(Locale.GERMAN));
+
+        ResourceBundleMessageProvider.uninstall("messageBundle");
+
+        try {
+            testMessage.getTitle(Locale.GERMAN);
+            fail("ResourceBundle uninstalled, should throw exception");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+    }
+
+    public void testGetText() {
+        ResourceBundleMessageProvider rbmp = new ResourceBundleMessageProvider("messageBundle");
+
+        assertEquals("Default locale", "Hello World", rbmp.getText("helloWorld", "title", Locale.US));
+        assertEquals("Additional locale", "Hallo Welt", rbmp.getText("helloWorld", "title", Locale.GERMAN));
+        assertEquals("Fallback locale", "Hello World", rbmp.getText("helloWorld", "title", Locale.FRENCH));
+        assertEquals("Fallback when only in default", "This entry is not translated to any other languages",
+                rbmp.getText("helloWorld", "notTranslated", Locale.GERMAN));
+
+        // Test with list resource bundle
+//        ResourceBundleMessageProvider.uninstall("messageBundle"); // Remove
+//        ResourceBundleMessageProvider.install("org.apache.commons.i18n.MyListResourceBundle"); // Install ListResourceBundle
+        ResourceBundleMessageProvider listResourceBundleProvider =
+                new ResourceBundleMessageProvider("org.apache.commons.i18n.MyListResourceBundle"); // Install ListResourceBundle
+        assertEquals("Value from ListResourceBundle", "listResourceValue",
+                listResourceBundleProvider.getText("helloWorld", "title", Locale.US));
+        try {
+            String s = listResourceBundleProvider.getText("helloWorld", "text", Locale.US);
+            fail("Entry should not be found, since it is numeric. Found text: " + s);
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No message entries found for bundle with key helloWorld", mnfex.getMessage());
+        }
+
+        try {
+            rbmp.getText("nonExistentId", "nonExistentEntry", Locale.US);
+            fail("ID does not exist, should throw exception");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No message entries found for bundle with key nonExistentId", mnfex.getMessage());
+        }
+
+        try {
+            rbmp.getText("helloWorld", "nonExistentEntry", Locale.US);
+            fail("Entry does not exist, should throw exception");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No message entries found for bundle with key helloWorld", mnfex.getMessage());
+        }
+
+        // Test non-existent bundle, which should throw MissingResourceException
+        ResourceBundleMessageProvider.install("nonExistentBundle"); // Install non-existent bundle
+        ResourceBundleMessageProvider.update("messageBundle"); // Place last in list
+        rbmp.getText("helloWorld", "title", Locale.US); // Should not throw Exception
+
+        ResourceBundleMessageProvider nonExistentBundleProvider = new ResourceBundleMessageProvider("nonExistentBundle");
+        try {
+            nonExistentBundleProvider.getText("fooBar", "text", Locale.GERMAN);
+            fail("Bundle does not exist and should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No message entries found for bundle with key fooBar", mnfex.getMessage());
+        }
+    }
+
+    public void testGetEntries() {
+//        ResourceBundleMessageProvider.install("messageBundle");
+        Map usEntries = new ResourceBundleMessageProvider("messageBundle").getEntries("helloWorld", Locale.US);
+        assertEquals("Default locale, no of entries", 3, usEntries.size());
+        assertEquals("Default locale, title", "Hello World", usEntries.get("title"));
+        assertEquals("Default locale, text", "I wish you a merry christmas!", usEntries.get("text"));
+        assertEquals("This entry is not translated to any other languages", usEntries.get("notTranslated"));
+
+        Map germanEntries = new ResourceBundleMessageProvider("messageBundle").getEntries("helloWorld", Locale.GERMAN);
+        assertEquals("No of entries", 3, germanEntries.size());
+        assertEquals("Hallo Welt", germanEntries.get("title"));
+        assertEquals("Ich wünsche Dir alles Gute und ein frohes Fest!", germanEntries.get("text"));
+        assertEquals("This entry is not translated to any other languages", germanEntries.get("notTranslated"));
+
+        Map frenchEntries = new ResourceBundleMessageProvider("messageBundle").getEntries("helloWorld", Locale.FRENCH);
+        assertEquals("Fallback locale, no of entries", 3, frenchEntries.size());
+        assertEquals("Fallback locale, title", "Hello World", frenchEntries.get("title"));
+        assertEquals("Fallback locale, text", "I wish you a merry christmas!", frenchEntries.get("text"));
+        assertEquals("This entry is not translated to any other languages", frenchEntries.get("notTranslated"));
+
+
+        // Test non-existent bundle, which should throw MissingResourceException
+//        ResourceBundleMessageProvider.install("nonExistentBundle"); // Install non-existent bundle
+//        ResourceBundleMessageProvider.update("messageBundle"); // Place last in list
+        ResourceBundleMessageProvider nonExistentBundleProvider = new ResourceBundleMessageProvider("nonExistentBundle");
+        try {
+            nonExistentBundleProvider.getEntries("fooBar", Locale.GERMAN);
+            fail("Bundle does not exist and should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No message entries found for bundle with key fooBar", mnfex.getMessage());
+        }
+    }
+}
\ No newline at end of file
Index: src/test/org/apache/commons/i18n/MyListResourceBundle.java
===================================================================
--- src/test/org/apache/commons/i18n/MyListResourceBundle.java (revision 0)
+++ src/test/org/apache/commons/i18n/MyListResourceBundle.java (revision 0)
@@ -0,0 +1,35 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n;
+
+import java.util.ListResourceBundle;
+
+/**
+ * ListResourceBundle implementation used to test ClassCastException in
+ * <code>ResourceBundleMessageProvider</code>
+ * @author Mattias Jiderhamn
+ */
+public class MyListResourceBundle extends ListResourceBundle {
+    public Object[][] getContents() {
+        return contents;
+    }
+
+    static final Object[][] contents = {
+        {"helloWorld.title", "listResourceValue"},
+        {"helloWorld.text", new Integer(1)} // Should cause ClassCastException
+    };
+}
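For reference, the ClassCastException path this bundle exercises can be reproduced with the plain JDK alone. A minimal stand-alone sketch (the class name ListBundleDemo is made up for illustration and is not part of the patch):

```java
import java.util.ListResourceBundle;
import java.util.ResourceBundle;

// A ListResourceBundle holding a non-String value, like MyListResourceBundle above.
public class ListBundleDemo extends ListResourceBundle {
    public Object[][] getContents() {
        return new Object[][] {
            {"helloWorld.title", "listResourceValue"},
            {"helloWorld.text", Integer.valueOf(1)} // non-String entry
        };
    }

    public static void main(String[] args) {
        ResourceBundle rb = new ListBundleDemo();
        // getObject() returns the raw value regardless of its type...
        System.out.println(rb.getObject("helloWorld.text")); // prints 1
        // ...but getString() casts the value to String and therefore throws
        // ClassCastException for the Integer entry, which is what
        // ResourceBundleMessageProvider turns into MessageNotFoundException.
        try {
            rb.getString("helloWorld.text");
        } catch (ClassCastException ccex) {
            System.out.println("ClassCastException for numeric entry");
        }
    }
}
```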
Index: src/test/org/apache/commons/i18n/LocalizedExceptionTest.java
===================================================================
--- src/test/org/apache/commons/i18n/LocalizedExceptionTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/LocalizedExceptionTest.java (revision 0)
@@ -0,0 +1,61 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n;
+
+import org.apache.commons.i18n.bundles.ErrorBundle;
+
+import java.util.Locale;
+
+/**
+ * @author Mattias Jiderhamn
+ */
+public class LocalizedExceptionTest extends MockProviderTestBase {
+    public void testLocalizedErrorWithCause() {
+        Throwable cause = new Exception("foo");
+        ErrorBundle errorBundle = new ErrorBundle("errorMessageId");
+        LocalizedException le = new LocalizedException(errorBundle, cause);
+        assertEquals("Cause", cause, le.getCause());
+        assertEquals("Error bundle", errorBundle, le.getErrorMessage());
+        assertEquals("Error message", cause.getMessage(), le.getMessage());
+
+        addMockProvider(); // Add mock provider
+
+        LocalizedException le2 = new LocalizedException(errorBundle, cause);
+        assertEquals("Cause", cause, le2.getCause());
+        assertEquals("Error bundle", errorBundle, le2.getErrorMessage());
+        assertEquals("Error message", getMockString("errorMessageId", ErrorBundle.SUMMARY, Locale.getDefault()),
+                le2.getMessage());
+    }
+
+
+    public void testLocalizedErrorWithoutCause() {
+        ErrorBundle errorBundle = new ErrorBundle("errorMessageId");
+        LocalizedException le = new LocalizedException(errorBundle);
+        assertNull("Cause", le.getCause());
+        assertEquals("Error bundle", errorBundle, le.getErrorMessage());
+        assertEquals("Error message",
+                "Message bundle with key errorMessageId does not contain an entry with key summary", le.getMessage());
+
+        addMockProvider(); // Add mock provider
+
+        LocalizedException le2 = new LocalizedException(errorBundle);
+        assertNull("Cause", le2.getCause());
+        assertEquals("Error bundle", errorBundle, le2.getErrorMessage());
+        assertEquals("Error message", getMockString("errorMessageId", ErrorBundle.SUMMARY, Locale.getDefault()),
+                le2.getMessage());
+    }
+}
Index: src/test/org/apache/commons/i18n/I18nTestSuite.java
===================================================================
--- src/test/org/apache/commons/i18n/I18nTestSuite.java (revision 0)
+++ src/test/org/apache/commons/i18n/I18nTestSuite.java (revision 0)
@@ -0,0 +1,38 @@
+/*
+*
+* ====================================================================
+*
+* Copyright 2004 The Apache Software Foundation
+*
+* Licensed under the Apache License, Version 2.0 (the "License");
+* you may not use this file except in compliance with the License.
+* You may obtain a copy of the License at
+*
+*     http://www.apache.org/licenses/LICENSE-2.0
+*
+* Unless required by applicable law or agreed to in writing, software
+* distributed under the License is distributed on an "AS IS" BASIS,
+* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+* See the License for the specific language governing permissions and
+* limitations under the License.
+*
+*/
+package org.apache.commons.i18n;
+
+import junit.framework.Test;
+import junit.framework.TestSuite;
+
+/**
+ * @author Daniel Florey
+ *
+ */
+public class I18nTestSuite extends TestSuite {
+    public static void main(java.lang.String[] args) {
+        junit.textui.TestRunner.run(suite());
+    }
+
+    public static Test suite() {
+        TestSuite suite = new TestSuite(ResourceBundleMessageProviderTest.class);
+        return suite;
+    }
+}
Index: src/test/org/apache/commons/i18n/LocalizedRuntimeExceptionTest.java
===================================================================
--- src/test/org/apache/commons/i18n/LocalizedRuntimeExceptionTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/LocalizedRuntimeExceptionTest.java (revision 0)
@@ -0,0 +1,61 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n;
+
+import org.apache.commons.i18n.bundles.ErrorBundle;
+
+import java.util.Locale;
+
+/**
+ * @author Mattias Jiderhamn
+ */
+public class LocalizedRuntimeExceptionTest extends MockProviderTestBase {
+    public void testLocalizedErrorWithCause() {
+        Throwable cause = new Exception("foo");
+        ErrorBundle errorBundle = new ErrorBundle("errorMessageId");
+        LocalizedRuntimeException le = new LocalizedRuntimeException(errorBundle, cause);
+        assertEquals("Cause", cause, le.getCause());
+        assertEquals("Error bundle", errorBundle, le.getErrorMessage());
+        assertEquals("Error message", cause.getMessage(), le.getMessage());
+
+        addMockProvider(); // Add mock provider
+
+        LocalizedRuntimeException le2 = new LocalizedRuntimeException(errorBundle, cause);
+        assertEquals("Cause", cause, le2.getCause());
+        assertEquals("Error bundle", errorBundle, le2.getErrorMessage());
+        assertEquals("Error message", getMockString("errorMessageId", ErrorBundle.SUMMARY, Locale.getDefault()),
+                le2.getMessage());
+    }
+
+
+    public void testLocalizedErrorWithoutCause() {
+        ErrorBundle errorBundle = new ErrorBundle("errorMessageId");
+        LocalizedRuntimeException le = new LocalizedRuntimeException(errorBundle);
+        assertNull("Cause", le.getCause());
+        assertEquals("Error bundle", errorBundle, le.getErrorMessage());
+        assertEquals("Error message",
+                "Message bundle with key errorMessageId does not contain an entry with key summary", le.getMessage());
+
+        addMockProvider(); // Add mock provider
+
+        LocalizedRuntimeException le2 = new LocalizedRuntimeException(errorBundle);
+        assertNull("Cause", le2.getCause());
+        assertEquals("Error bundle", errorBundle, le2.getErrorMessage());
+        assertEquals("Error message", getMockString("errorMessageId", ErrorBundle.SUMMARY, Locale.getDefault()),
+                le2.getMessage());
+    }
+}
Index: src/test/org/apache/commons/i18n/bundles/ErrorBundleTest.java
===================================================================
--- src/test/org/apache/commons/i18n/bundles/ErrorBundleTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/bundles/ErrorBundleTest.java (revision 0)
@@ -0,0 +1,86 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n.bundles;
+
+import org.apache.commons.i18n.MockProviderTestBase;
+import org.apache.commons.i18n.MessageNotFoundException;
+
+import java.util.Locale;
+
+/**
+ * @author Mattias Jiderhamn
+ */
+public class ErrorBundleTest extends MockProviderTestBase {
+    public void testWithoutArguments() {
+        ErrorBundle eb = new ErrorBundle("dummyId");
+        try {
+            eb.getText(Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+        assertEquals("Default used", "defaultText", eb.getText(Locale.US, "defaultText"));
+
+        addMockProvider();
+
+        assertEquals("Normal use", getMockString("dummyId", ErrorBundle.TEXT, Locale.US), eb.getText(Locale.US));
+        assertEquals("Normal use", getMockString("dummyId", ErrorBundle.TITLE, Locale.US), eb.getTitle(Locale.US));
+        assertEquals("Normal use", getMockString("dummyId", ErrorBundle.SUMMARY, Locale.US), eb.getSummary(Locale.US));
+        assertEquals("Normal use", getMockString("dummyId", ErrorBundle.DETAILS, Locale.US), eb.getDetails(Locale.US));
+        assertEquals("Default not used", getMockString("dummyId", ErrorBundle.TEXT, Locale.US),
+                eb.getText(Locale.US, "defaultText"));
+        assertEquals("Default not used", getMockString("dummyId", ErrorBundle.TITLE, Locale.US),
+                eb.getTitle(Locale.US, "defaultText"));
+        assertEquals("Default not used", getMockString("dummyId", ErrorBundle.SUMMARY, Locale.US),
+                eb.getSummary(Locale.US, "defaultText"));
+        assertEquals("Default not used", getMockString("dummyId", ErrorBundle.DETAILS, Locale.US),
+                eb.getDetails(Locale.US, "defaultText"));
+    }
+
+    public void testWithArguments() {
+        String[] arguments = new String[]{"arg1", "arg2"};
+        ErrorBundle eb = new ErrorBundle("dummyId", arguments);
+        try {
+            eb.getText(Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+        assertEquals("Default used", "defaultText arg1 arg2", eb.getText(Locale.US, "defaultText {0} {1}"));
+
+        addMockProvider();
+
+        assertEquals("Normal use", getFormattedMockString("dummyId", ErrorBundle.TEXT, arguments, Locale.US),
+                eb.getText(Locale.US));
+        assertEquals("Normal use", getFormattedMockString("dummyId", ErrorBundle.TITLE, arguments, Locale.US),
+                eb.getTitle(Locale.US));
+        assertEquals("Normal use", getFormattedMockString("dummyId", ErrorBundle.SUMMARY, arguments, Locale.US),
+                eb.getSummary(Locale.US));
+        assertEquals("Normal use", getFormattedMockString("dummyId", ErrorBundle.DETAILS, arguments, Locale.US),
+                eb.getDetails(Locale.US));
+        assertEquals("Default not used", getFormattedMockString("dummyId", ErrorBundle.TEXT, arguments, Locale.US),
+                eb.getText(Locale.US, "defaultText"));
+        assertEquals("Default not used", getFormattedMockString("dummyId", ErrorBundle.TITLE, arguments, Locale.US),
+                eb.getTitle(Locale.US, "defaultText"));
+        assertEquals("Default not used", getFormattedMockString("dummyId", ErrorBundle.SUMMARY, arguments, Locale.US),
+                eb.getSummary(Locale.US, "defaultText"));
+        assertEquals("Default not used", getFormattedMockString("dummyId", ErrorBundle.DETAILS, arguments, Locale.US),
+                eb.getDetails(Locale.US, "defaultText"));
+    }
+}
\ No newline at end of file
Index: src/test/org/apache/commons/i18n/bundles/MessageBundleTest.java
===================================================================
--- src/test/org/apache/commons/i18n/bundles/MessageBundleTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/bundles/MessageBundleTest.java (revision 0)
@@ -0,0 +1,72 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n.bundles;
+
+import org.apache.commons.i18n.MockProviderTestBase;
+import org.apache.commons.i18n.MessageNotFoundException;
+
+import java.util.Locale;
+
+/**
+ * @author Mattias Jiderhamn
+ */
+public class MessageBundleTest extends MockProviderTestBase {
+    public void testWithoutArguments() {
+        MessageBundle mb = new MessageBundle("dummyId");
+        try {
+            mb.getText(Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+        assertEquals("Default used", "defaultText", mb.getText(Locale.US, "defaultText"));
+
+        addMockProvider();
+
+        assertEquals("Normal use", getMockString("dummyId", MessageBundle.TEXT, Locale.US), mb.getText(Locale.US));
+        assertEquals("Normal use", getMockString("dummyId", MessageBundle.TITLE, Locale.US), mb.getTitle(Locale.US));
+        assertEquals("Default not used", getMockString("dummyId", MessageBundle.TEXT, Locale.US),
+                mb.getText(Locale.US, "defaultText"));
+        assertEquals("Default not used", getMockString("dummyId", MessageBundle.TITLE, Locale.US),
+                mb.getTitle(Locale.US, "defaultText"));
+    }
+
+    public void testWithArguments() {
+        String[] arguments = new String[]{"arg1", "arg2"};
+        MessageBundle mb = new MessageBundle("dummyId", arguments);
+        try {
+            mb.getText(Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+        assertEquals("Default used", "defaultText arg1 arg2", mb.getText(Locale.US, "defaultText {0} {1}"));
+
+        addMockProvider();
+
+        assertEquals("Normal use", getFormattedMockString("dummyId", MessageBundle.TEXT, arguments, Locale.US),
+                mb.getText(Locale.US));
+        assertEquals("Normal use", getFormattedMockString("dummyId", MessageBundle.TITLE, arguments, Locale.US),
+                mb.getTitle(Locale.US));
+        assertEquals("Default not used", getFormattedMockString("dummyId", MessageBundle.TEXT, arguments, Locale.US),
+                mb.getText(Locale.US, "defaultText"));
+        assertEquals("Default not used", getFormattedMockString("dummyId", MessageBundle.TITLE, arguments, Locale.US),
+                mb.getTitle(Locale.US, "defaultText"));
+    }
+}
Index: src/test/org/apache/commons/i18n/bundles/TextBundleTest.java
===================================================================
--- src/test/org/apache/commons/i18n/bundles/TextBundleTest.java (revision 0)
+++ src/test/org/apache/commons/i18n/bundles/TextBundleTest.java (revision 0)
@@ -0,0 +1,65 @@
+/*
+ * Copyright 2005 The Apache Software Foundation
+ *
+ * Licensed under the Apache License, Version 2.0 (the "License");
+ * you may not use this file except in compliance with the License.
+ * You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ *
+ */
+package org.apache.commons.i18n.bundles;
+
+import org.apache.commons.i18n.MockProviderTestBase;
+import org.apache.commons.i18n.MessageNotFoundException;
+
+import java.util.Locale;
+
+/**
+ * @author Mattias Jiderhamn
+ */
+public class TextBundleTest extends MockProviderTestBase {
+    public void testWithoutArguments() {
+        TextBundle textBundle = new TextBundle("dummyId");
+        try {
+            textBundle.getText(Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+        assertEquals("Default used", "defaultText", textBundle.getText(Locale.US, "defaultText"));
+
+        addMockProvider();
+
+        assertEquals("Normal use", getMockString("dummyId", TextBundle.TEXT, Locale.US), textBundle.getText(Locale.US));
+        assertEquals("Default not used", getMockString("dummyId", TextBundle.TEXT, Locale.US),
+                textBundle.getText(Locale.US, "defaultText"));
+    }
+
+    public void testWithArguments() {
+        String[] arguments = new String[]{"arg1", "arg2"};
+        TextBundle textBundle = new TextBundle("dummyId", arguments);
+        try {
+            textBundle.getText(Locale.US);
+            fail("Entry not found should cause error");
+        }
+        catch(MessageNotFoundException mnfex) {
+            assertEquals("No MessageProvider registered", mnfex.getMessage());
+        }
+        assertEquals("Default used", "defaultText arg1 arg2", textBundle.getText(Locale.US, "defaultText {0} {1}"));
+
+        addMockProvider();
+
+        assertEquals("Normal use", getFormattedMockString("dummyId", TextBundle.TEXT, arguments, Locale.US),
+                textBundle.getText(Locale.US));
+        assertEquals("Default not used", getFormattedMockString("dummyId", TextBundle.TEXT, arguments, Locale.US),
+                textBundle.getText(Locale.US, "defaultText"));
+    }
+}


---------------------------------------------------------------------
To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: commons-dev-help@jakarta.apache.org






Re: Taglibrary documentation

Posted by Wendy Smoak <ja...@wendysmoak.com>.
From: "Murray Collingwood" <mu...@focus-computing.com.au>

> I've read through most of this documentation and it doesn't help much.

I completely agree.  I've never been able to just read the taglib docs and
figure out what combination of attributes will do what I want.

> I'm really looking for a reference manual that:
> 1.  lists all of the available tags in a tag library,
> 2.  describes all of the parameters,
> 3.  how the parameters differ from each other and how they interact (you
> can't
>    always use two parameters together and sometimes if you have one you
> need a
>    second etc)
> 4.  and optionally including some examples of usage in everyday JSP pages
>
> Here's a good example from Hibernate:
> http://www.hibernate.org/hib_docs/v3/reference/en/html/mapping.html#mapping-declaration-class

That's *nice*.  We're getting there. :)  As part of the as-yet-unpublished
website reorganization, we now have JSP 1.2 tld files with embedded HTML
documentation.  These are being used to generate Tag Reference pages
http://svn.apache.org/builds/struts/maven/trunk/site-test/struts-taglib/tagreference-struts-html.html
and also Taglibdoc
http://svn.apache.org/builds/struts/maven/trunk/site-test/struts-taglib/tlddoc/index.html

Right now... it's the same documentation from the original xml source
documents, presented two different ways, so it's no great improvement.  And
really, no amount of writing about the attributes is going to help-- the
descriptions are technically correct.

I think examples are the only thing that will clear up the many mysteries of
what, exactly, happens when you use attributes in various combinations.  The
dtd for JSP 1.2 tag library descriptors includes an <example> tag that we
can now take advantage of.
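To make that concrete, the <example> element sits at the end of a <tag> entry in the TLD. Roughly like this (a sketch from memory of the JSP 1.2 DTD; the tag shown and its example body are just illustrations, not actual committed documentation):

```
<tag>
    <name>text</name>
    <tag-class>org.apache.struts.taglib.html.TextTag</tag-class>
    <body-content>empty</body-content>
    <description>Renders an HTML text input for a bean property.</description>
    <example><![CDATA[
        <html:text property="username" size="20"/>
    ]]></example>
</tag>
```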

But first, someone has to come up with the examples. :)  Patches are very
welcome-- the TLD files are under src/tld in the 'taglib' sub project.  If
you're interested in working on them, just ask on the dev list if you need
help getting started with Subversion, etc.

Alternately, start a Wiki page and I'd be more than happy to move the
examples into the TLDs as appropriate.  I'm not likely to have time to sit
down and come up with examples for all of these tags, but if a few people
get involved, it won't be long before we have them all done.

-- 
Wendy Smoak


---------------------------------------------------------------------
To unsubscribe, e-mail: user-unsubscribe@struts.apache.org
For additional commands, e-mail: user-help@struts.apache.org


Re: Automatic rejection

Posted by Rakesh <ra...@netcore.co.in>.
On Tue, 2004-11-02 at 18:54, Moussa Fall wrote:
> Thank you, Martin and Duncan!
> Sorry I did not mention this information. I am using RH9 with Postfix.
> Maybe I can use Mailscanner.

If you use MailScanner, you can specify in the MailScanner configuration
to discard the spam mails, or simply store (quarantine) the message
instead of delivering it.
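The relevant knobs live in MailScanner.conf. Roughly like this (a sketch; exact option names and values are from memory, so double-check against the comments in your own MailScanner.conf):

```
# MailScanner.conf -- sketch only, verify against your installed version.
# Quarantine ordinary spam instead of delivering it:
Spam Actions = store
# Silently discard high-scoring spam:
High Scoring Spam Actions = delete
```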

> 
> On 2 Nov 2004 at 12:53, Martin Hepworth wrote:
> 
> > Moussa Fall wrote:
> > > Question from a newbie: can anyone point me to a location where I can find out to make 
> > > spamassassin automatically reject spam? I noticed that all tagged spam are really spams and 
> > > I do not want users to receive mail with scores, etc.
> > > 
> > > Thank you.
> > 
> > Hi
> > 
> > if you want to 'reject' the email you'll need to use milter with 
> > sendmail or something similir for your MTA (exim, postfix..)
> > 
> > If you want to accept all email then process before delivery you can use 
> > MailScanner or amavis-new - I use MailScanner.
> > 
> > or you could use procmail if you are on a *nix ermail server to process 
> > the emails upon deliver.
> > 
> > 
> > --
> > Martin Hepworth
> > Senior Systems Administrator
> > Solid State Logic Ltd
> > tel: +44 (0)1865 842300
> > 
> > 
> > **********************************************************************
> > 
> > This email and any files transmitted with it are confidential and
> > intended solely for the use of the individual or entity to whom they
> > are addressed. If you have received this email in error please notify
> > the system manager.
> > 
> > This footnote confirms that this email message has been swept
> > for the presence of computer viruses and is believed to be clean.
> > 
> > **********************************************************************
> > 
> 
> 


Re: SVG in applet problems

Posted by Claire WALL <K9...@atlas.kingston.ac.uk>.
The problems I'm having seem to do with security issues. Is there
something I have to do when trying to put SVG into an applet, like
set security properties or something?
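One thing worth checking: unsigned applets run in the sandbox, so anything that touches the filesystem or external resources will hit an AccessControlException. For local testing you can grant permissions in a Java policy file. A sketch (the codeBase URL is only an example; for anything public you'd sign the applet jar or narrow the permission set instead):

```
// java.policy fragment -- load via -Djava.security.policy=... or the
// browser plugin's policy file; sketch only, adjust codeBase to your setup.
grant codeBase "http://localhost/applets/-" {
    permission java.security.AllPermission;
};
```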


On 13 Mar 2003, at 7:59, G. Wade Johnson wrote:

> I've been displaying SVG in a Batik-based applet for some time. Almost
> all of the information I needed was in this list. (Search the list for
> "Applet".) The things that weren't on the list when I started, are now.
> <grin/>
> 
> G. Wade
> 
> Claire Wall wrote:
> > 
> > Thanx :)
> > 
> > However, i looked all over the CVS and couldn't find
> > anything. Is there anybody out there who has got an
> > SVG graphic to display in an applet? If so I would
> > like to talk to you as I really need to get this
> > working and it would be easier and quicker to
> > correspond via YIM or something similiar.
> > 
> > Cheers
> > Claire
> > 
> > --- J Aaron Farr <ja...@yahoo.com> wrote: > On
> > Wed, 2003-03-12 at 07:36, Claire WALL wrote:
> > > > I am using Batik 1.5.
> > > >
> > > > I am fairly new to Batik. What is the CVS?
> > >
> > > CVS = Concurrent Versions System
> > >
> > > The latest code the batik team is currently working
> > > on is hosted in a
> > > CVS system.  Therefore, the problems you faced may
> > > be bugs in batik
> > > which have since been fixed in the CVS hosted code,
> > > but not yet
> > > released.
> > >
> > > You can find out more about CVS at:
> > > http://cvshome.org
> > >
> > > To find out more about the Apache XML Group's CVS
> > > accounts see:
> > > http://xml.apache.org/cvs.html
> > >
> > > To browse the Batik CVS repository in your web
> > > browser see:
> > > http://cvs.apache.org/viewcvs.cgi/xml-batik/
> > >
> > > The above links have directions on how to check out
> > > the latest source
> > > code from CVS to your local PC.
> > >
> > > --
> > >   jaaron    <ja...@yahoo.com>
> > >
> > >
> > >
> > ---------------------------------------------------------------------
> > > To unsubscribe, e-mail:
> > > batik-users-unsubscribe@xml.apache.org
> > > For additional commands, e-mail:
> > > batik-users-help@xml.apache.org
> > >
> > >
> > >
> > 
> > __________________________________________________
> > Do You Yahoo!?
> > Everything you'll ever need on one web page
> > from News and Sport to Email and Music Charts
> > http://uk.my.yahoo.com
> > 
> > ---------------------------------------------------------------------
> > To unsubscribe, e-mail: batik-users-unsubscribe@xml.apache.org
> > For additional commands, e-mail: batik-users-help@xml.apache.org
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: batik-users-unsubscribe@xml.apache.org
> For additional commands, e-mail: batik-users-help@xml.apache.org
> 



---------------------------------------------------------------------
To unsubscribe, e-mail: batik-users-unsubscribe@xml.apache.org
For additional commands, e-mail: batik-users-help@xml.apache.org


Re: SVG in applet problems

Posted by "G. Wade Johnson" <wa...@abbnm.com>.
I've been displaying SVG in a Batik-based applet for some time. Almost
all of the information I needed was in this list. (Search the list for
"Applet".) The things that weren't on the list when I started, are now.
<grin/>

G. Wade


---------------------------------------------------------------------
To unsubscribe, e-mail: batik-users-unsubscribe@xml.apache.org
For additional commands, e-mail: batik-users-help@xml.apache.org


RE: SVG in applet problems

Posted by Claire Wall <re...@yahoo.co.uk>.
Thanx :)

However, I looked all over the CVS and couldn't find
anything. Is there anybody out there who has got an
SVG graphic to display in an applet? If so I would
like to talk to you as I really need to get this
working and it would be easier and quicker to
correspond via YIM or something similar.

Cheers
Claire



---------------------------------------------------------------------
To unsubscribe, e-mail: batik-users-unsubscribe@xml.apache.org
For additional commands, e-mail: batik-users-help@xml.apache.org


RE: SVG in applet problems

Posted by J Aaron Farr <ja...@yahoo.com>.
On Wed, 2003-03-12 at 07:36, Claire WALL wrote:
> I am using Batik 1.5. 
> 
> I am fairly new to Batik. What is the CVS? 

CVS = Concurrent Versions System

The latest code the batik team is currently working on is hosted in a
CVS system.  Therefore, the problems you faced may be bugs in batik
which have since been fixed in the CVS hosted code, but not yet
released.

You can find out more about CVS at:  http://cvshome.org

To find out more about the Apache XML Group's CVS accounts see:
http://xml.apache.org/cvs.html

To browse the Batik CVS repository in your web browser see:
http://cvs.apache.org/viewcvs.cgi/xml-batik/

The above links have directions on how to check out the latest source
code from CVS to your local PC.

-- 
  jaaron    <ja...@yahoo.com>


---------------------------------------------------------------------
To unsubscribe, e-mail: batik-users-unsubscribe@xml.apache.org
For additional commands, e-mail: batik-users-help@xml.apache.org


Re: Modified test tree

Posted by Mike Stover <ms...@apache.org>.
On 4 Feb 2003 at 21:13, Oliver Rossmueller wrote:

> [snip]
> The factory is not the key point, the key point is that I would like to 
> have a clean separation of GUI and test elements. Then for example I 
> could create a command line tool to record test plans and use the same 
> functionality to create test elements that is used by the GUI. And I 
> could run this recorder on a linux box without X-Windows because the GUI 
> would not be used in that case. This is not possible at the moment.
> 

Well, I finally understand why this is a problem for you.  So, I started a wiki page about it.  
http://nagoya.apache.org/wiki/apachewiki.cgi?JMeterGuiTestElementSeparation

-Mike

---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Re: Modified test tree

Posted by Oliver Rossmueller <ol...@tuxerra.com>.
Mike Stover wrote:
> On 4 Feb 2003 at 0:15, Oliver Rossmueller wrote:
> 
> 
>>No, one factory to create all of them. In fact it is just 
>>Class.forName(elementClassName).newInstance(). Filling the properties 
>>with default values should not be done by the GUI but in the element's 
>>constructor. So the only property the test element constructor can not 
>>set is the gui class name. But the test element should not know about 
>>GUIs anyway so I don't see the need to hold the name of the 
>>corresponding GUI class in the test element's properties. There are 
>>other ways to assign GUIs to test elements (e.g. ask the GUI which kind 
>>of test elements it can handle and then select the right one) and to 
>>keep separation of concerns. The test elements are the model and should 
>>not know about the very existence of a GUI.
> 
> 
> Couple things.  One, I'd like to avoid a 1:1 mapping of testelements to gui classes.  
> Right now, 5 guis can serve the same test element, and that's a nice feature.  It 
> means I can make a different gui for the HTTP Sampler if I want, and the only thing 
> I have to write is the GUI - I don't have to make a new copy of the HTTPSampler 
> class.

I think you are talking about the request defaults elements which are 
all instances of ConfigTestElement but use different GUIs. I don't know 
why it is done this way - I thought this was some kind of copy&paste bug 
- because there are special classes in the code base for all the 
defaults but they are not used. I don't see any use of having two 
different GUIs for HTTPSampler to stay with your example. And how will 
JMeter decide which one to use?

> Two, related to that is that the GUIs are the authorities on how to initialize test 
> elements and their data - because they fully control the data that goes into them 
> anyway.

Well, the GUI fills the test elements with data, but the test elements 
have to check whether the values set by the GUI are valid and correct. They 
don't do this at the moment but IMHO they have to because they rely on 
correct data when executing the tests. And there is another way to fill 
test elements with data by loading a jmx file, and this should not go 
through the GUI.

> Three, the gui class is saved in the test element so that when test plans are 
> reloaded, JMeter knows which gui goes with it.  Otherwise, you need that 1:1 
> mapping to figure it out, plus that would be slower, as JMeter would have to search 
> through all the gui classes to find the right one.

It is not hard to create a map holding the required information e.g. 
when searching for the classes at startup.

> As far as I'm concerned, the separation of concerns is not violated by the 
> testelement holding gui related data.  As long as the code is completely ignorant, 
> I'm not worried about what data the guis stuff in them.

Ok.

> I noticed on some other wiki site developers were maintaining a discussion by 
> writing pro-con lists.  I'd like to see your pro-con thoughts on making this factory.  
> What would be the advantages?  What would be the disadvantages?

The factory is not the key point, the key point is that I would like to 
have a clean separation of GUI and test elements. Then for example I 
could create a command line tool to record test plans and use the same 
functionality to create test elements that is used by the GUI. And I 
could run this recorder on a linux box without X-Windows because the GUI 
would not be used in that case. This is not possible at the moment.

Anyway, I will stop this discussion now. I see no way to convince you 
that a clean separation of GUI and model is an advantage and you will 
not convince me that it is good as it is now. As nobody else seems 
interested in this topic I'll stop wasting our time.

Oliver




Re: repos not accessible after log files removed (II)

Posted by Philip Martin <ph...@codematters.co.uk>.
"Jan Hendrik" <ja...@bigfoot.com> writes:

> To complete this barely noticed thread at least for my own records:

I guess you got so little response because nobody knows what to suggest.

> After another recovery I could dump the corrupt repos without trouble 
> and load it into a virginal .30 repos. Doing a verify on the repos also 
> showed no troubles. However, after a few commits and updates it 
> was business as usual: no longer accessible. I had not even 
> removed the unused log files at that point, something I supposed to 
> be the cause so far.

Your initial claim, that removing the log files made the repository
inaccessible, sounded extremely unlikely.  As far as I can see you
have now discounted that, and you are left with a simple "Subversion
doesn't work on my Windows platform".

> It would be interesting though to hear if this belongs to the typical 
> things one has to "accept" when using basically Unix apps such as 
> Berkeley DB, Apache, and Subversion in their Windows ports. (It's a small 
> peer-to-peer LAN here, all W2K SP2.)

As far as I know other people are using Subversion and Apache on
Windows, but I don't do it myself so I can't really help. If you
describe your system and setup more fully it is possible someone will
recognise what you are doing wrong.

>> > C:\>svn ls http://dim4300/svn/repos/trunk/internet/Marine
>> > svn: RA layer request failed
>> > svn: PROPFIND request failed on '/svn/repos/trunk/internet/Marine'
>> > svn: PROPFIND of '/svn/repos/trunk/internet/Marine': could not
>> > connect to server
>> >  (http://dim4300)

That simply shows that the client cannot communicate with the server;
you will get the same error if Apache is not running.  To get more
information you will need to debug the server (or possibly the client
over ra_local).

-- 
Philip Martin

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Timestamp Issues

Posted by Christopher Ness <ch...@nesser.org>.
Before we get to the meat, I don't know if you saw this in the client
config (note client, not server) or not [~/.subversion/config]:

   ### Set use-commit-times to make checkout/update/switch/revert
   ### put last-committed timestamps on every file touched.
   # use-commit-times = yes

It may be useful for you to lock the last-changed timestamp to the commit time.

On Mon, 2004-29-11 at 21:55 -0500, trlists@clayst.com wrote:
> On 29 Nov 2004 Christopher Ness wrote:
> > Do you _have_ to use that tool or can you do some scripting in SVN to
> > deploy changes into production? 
> 
> Well yes, but I'd rather not!  I'm also used to simply observing the  
> timestamps (as in "well, I probably don't have to check there to see if 
> that's where the newly created problem is, because that file hasn't 
> changed since last July").  I shouldn't be reworking my whole work 
> environment to meet the needs of the VCS -- it should be a tool, not a 
> framework that defines the development setup (though these days, many 
> of the tools behave like wannabe frameworks! :-)).

I like simple things too, I like them a lot!  But now that I'm a little
more comfortable with subversion and *nix, these things are becoming
more trivial to pin down.

To comment on your example above about someone breaking the build:

For example, if someone committed a file that broke something in the past
- let's say within the last day - you just need to ask the Subversion
server for the logs covering those times.  You can try this example for
yourself by checking out the trunk of the SVN project.  

Of course you could use this method to find all the changed files in
July, but there will likely be duplicates; use `sort` and `uniq` to get
rid of those. 

WARNING:  Someone may have a better method of discovering the files that
changed between two time periods.  Please speak up as I'd like to know
too!  Windows users can get a GNU/bash environment apparently.

[nesscg@woman trunk]$ svn info
Path: .
URL: http://svn.collab.net/repos/svn/trunk
Repository UUID: 65390229-12b7-0310-b90b-f21a5aa7ec8e
Revision: 12092

<snip> Below might be interesting to you</snip>

Last Changed Date: 2004-11-29 20:56:04 -0500 (Mon, 29 Nov 2004)
Properties Last Updated: 2004-09-15 13:26:31 -0400 (Wed, 15 Sep 2004)

[nesscg@woman trunk]$ svn diff -r{2004-11-28}:{"2004-11-29 22:30"} |
grep "^Index"
Index: build.conf
Index: www/project_links.html
Index: notes/fs_dumprestore.txt
Index: subversion/include/svn_ctype.h
Index: subversion/include/svn_utf.h

<snip> More file paths are output here </snip>

Index: doc/translations/spanish/book/ch04.xml
Index: doc/translations/spanish/glosario_traduccion
Index: doc/book/TODO
[nesscg@woman trunk]$

Now you have a hitlist to look through for the buggy code.  Some you can
probably ignore right away, for example if the problem wasn't in the
book source.
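Chris's grep step, together with the `sort`/`uniq` pass he suggests, can be sketched as a single pipeline. This is only an illustration: the canned `printf` input stands in for real `svn diff` output so the sketch runs without a repository; with a working copy you would feed it from `svn diff -r '{2004-11-28}:{2004-11-29 22:30}'` instead.

```shell
# Dedup the "Index:" lines of an svn diff (canned sample input below;
# the file names are taken from the transcript above).
printf 'Index: build.conf\n--- build.conf\nIndex: doc/book/TODO\nIndex: build.conf\n' |
  grep '^Index: ' | sort | uniq
# Prints:
# Index: build.conf
# Index: doc/book/TODO
```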

> The FTP client isn't a "middle man" it's a tool appropriate for the 
> job.  One of the things I like about Subversion is that it doesn't try 
> to be something more than a VCS -- but I wish (based on admittedly very 
> limited experience) that it didn't make quite so many assumptions about 
> how VCS users operate.  Of course it has to make some -- I just want 
> the ones that don't work for *me* left out :-).

Now you need to get the changed files onto the production server.  I'm
going to leave that as an exercise for the reader, as there are many ways
of doing that task.

As an aside:

I find SVN is very much a Unix based product in that you should be able
to take the output from it and pipe it into another program to get some
work done.  I like that ability.

Have fun!
Chris

Re: Status of meta-data-versioning (mod time)?

Posted by "Ph. Marek" <ph...@bmlv.gv.at>.
On Monday 11 July 2005 15:21, Ben Collins-Sussman wrote:
> On Jul 11, 2005, at 5:47 AM, Oliver Betz wrote:
> > In their world, every project seems to be started under version
> > control. Well, that's not the real world.
>
> Not true at all.  Rather, we expect that 90% of the time, projects
> will be 'svn import'ed into subversion.
>
> Once the project is in subversion's repository, tell me why you still
> care about the original timestamps.  I'd like to know.  The answer I
> always hear is, "I have a whole bunch of scripts that depend on
> timestamps to perform copy synchronizations!"... to which I'm not
> sympathetic.
Well, others just have to do things with files which don't involve 
subversion ... maybe sending and receiving from third parties (where 
timestamps are a simpler [and for big files faster] way to check than 
hashes), maybe there's some medium or endpoint involved where it's not 
possible to do a full subversion chain (think mirrors and svk) ...

Not everyone has complete control over the working data.
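The trade-off Phil mentions can be illustrated in a few lines of shell, with made-up file names and no Subversion involved: the `-nt` timestamp test reads only inode metadata, while a hash has to read every byte of the file.

```shell
# Contrast of the two change checks: metadata-only vs. full content.
workdir=$(mktemp -d)
printf 'payload' > "$workdir/local"
cp -p "$workdir/local" "$workdir/mirror"     # -p preserves the mtime
# cheap metadata-only check (true only if local is strictly newer):
[ "$workdir/local" -nt "$workdir/mirror" ] || echo "not newer: skip transfer"
# full-content check (reads both files completely):
[ "$(md5sum < "$workdir/local")" = "$(md5sum < "$workdir/mirror")" ] \
  && echo "contents identical"
rm -rf "$workdir"
```

For a multi-gigabyte file the first check is effectively free, which is exactly why timestamps are attractive when exchanging files with third parties.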


Regards,

Phil

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Status of meta-data-versioning (mod time)?

Posted by Ryan Schmidt <su...@ryandesign.com>.
On 11.07.2005, at 15:21, Ben Collins-Sussman wrote:

> On Jul 11, 2005, at 5:47 AM, Oliver Betz wrote:
>
>> In their world, every project seems to be started under version
>> control. Well, that's not the real world.
>
> Not true at all.  Rather, we expect that 90% of the time, projects  
> will be 'svn import'ed into subversion.
>
> Once the project is in subversion's repository, tell me why you  
> still care about the original timestamps.  I'd like to know.  The  
> answer I always hear is, "I have a whole bunch of scripts that  
> depend on timestamps to perform copy synchronizations!"... to which  
> I'm not sympathetic.

How about "I'd like to know, before I imported this into Subversion,  
when the last time was that I edited some file."



---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Status of meta-data-versioning (mod time)?

Posted by Oliver Betz <li...@gmx.net>.
Dirk Schenkewitz <sc...@docomolab-euro.com> wrote:

> >>- People, including me, want to know "when was the last change to
> >>   that file?", even if the file was laying around for some time
> >>   (months/years!) before being put under subversion control. This
> >>   problem cannot be solved using --use-commit-times.
> > 
> > It can if you make one commit per file on initial import. Slow and
> > ugly, but it works tolerably.
> 
> No, it does not work at all. "use-commit-times = yes" sets the mtime
> of a checked out file to the time of the commit, the original mtime is
> lost when the next one does a checkout that creates this file. I just

right, but the commit shouldn't be so long past the modification, so 
you get approximately the time when the file was modified last.

My problem was to keep the timestamp on initial import of "legacy" 
projects.

BTW: I repeat my suggestion that a commit using "use-commit-times" 
should "touch" the files so that the wc has the same timestamp as the 
commit time.
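The "touch" Oliver suggests can be sketched with plain files, no Subversion involved: `touch -r` copies one file's mtime onto another, which is roughly what a post-commit step would do with the commit time. GNU `touch`/`stat` options are assumed, and the file names are made up.

```shell
# Sketch: make a freshly written file adopt a reference timestamp.
workdir=$(mktemp -d)
touch -d '2005-07-11 09:17:35' "$workdir/reference"  # stands in for the commit time
echo 'content' > "$workdir/wc-file"                  # freshly written, current mtime
touch -r "$workdir/reference" "$workdir/wc-file"     # adopt the reference mtime
[ "$(stat -c %Y "$workdir/reference")" = "$(stat -c %Y "$workdir/wc-file")" ] \
  && echo "mtimes match"
rm -rf "$workdir"
```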

> tested, with "use-commit-times = yes":

[example snipped]

> If you do commits for each single file at a time, you can keep the
> *order* of the original mtimes, that's all.

in your example, you added the file without tweaking the svn:date 
property of the commit.

[my "adjust commit time" hack]

> If I may I ask: How do you do that? Maybe it is your hack that would

see below. Comments welcome; I'm not a Perl monk.

Oliver

- snip -

#!/usr/bin/perl
# import files to subversion one by one and adjust commit date to file's mtime
# this method is extremely slow!
# directory timestamps are not kept and commits are not in chronological order

# 2005-07-11 Oliver Betz

# caution: stat() fails on filenames with foreign characters.
# maybe conflicting translations from svn and perl?

use strict;

use HTTP::Date qw(time2isoz);
use File::stat; # by-name access to mtime

my $rev;        # revision of committed file
my @svnstat;    # list of files
my $filename;   # path to current file
my $mtime;      # mtime of current file
my $more = 1;   # process more directories

$ENV{LANG}="C"; # else we might get localized responses

while ($more){
  $more = 0; # assume we had nothing more to do
  @svnstat = split /\n/, `svn stat`; # get all (not ignored) files
  foreach (@svnstat) {
    next unless $_ =~ /^\?.....\s+(\S.+)$/; # use only files not yet under version control
    $filename = $1;
    if (-d $filename) {$more = 1}; # we add another directory -> repeat loop
    $mtime = (stat($filename))->mtime;

    print `svn add -N "$filename"`; # put under svn control
    $rev=`svn ci -m "mtime keeping add of $filename"`; # commit this file immediately
    print "$rev"; # complete response (several lines)
    die "wrong response $rev" unless $rev =~ /Committed revision (\d+)\./;
    $rev = $1; # numerical value -> $rev
    $mtime = time2isoz($mtime);
    $mtime =~ s/\s/T/; # special format of svn time: 2005-07-11T09:17:35.000000Z
    $mtime =~ s/Z/.000000Z/; # svn time has us resolution
    `svn propset svn:date $mtime --revprop -r $rev`;
  };
}
print "ready\n";

__END__


-- 
Oliver Betz, Muenchen


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Status of meta-data-versioning (mod time)?

Posted by Dirk Schenkewitz <sc...@docomolab-euro.com>.
Oliver Betz wrote:
> Dirk Schenkewitz wrote:
> ...
>>- People, including me, want to know "when was the last change to
>>   that file?", even if the file was laying around for some time
>>   (months/years!) before being put under subversion control. This
>>   problem cannot be solved using --use-commit-times.
> 
> 
> It can if you make one commit per file on initial import. Slow and 
> ugly, but it works tolerably.

No, it does not work at all. "use-commit-times = yes" sets the mtime
of a checked out file to the time of the commit, the original mtime
is lost when the next one does a checkout that creates this file.
I just tested, with "use-commit-times = yes":
---------------------------------------------
nobody:/B/test1/trunk> cp -av /usr/src/daemontools-0.76/src/CHANGES .
nobody:/B/test1/trunk> svn add CHANGES
A         CHANGES
nobody:/B/test1/trunk> svn ci
Adding         trunk/CHANGES
Transmitting file data .
Committed revision 7.
nobody:/B/test1/trunk> svn up
At revision 7.
nobody:/B/test1/trunk> ll -tr
...
-rw-r--r--  1 nobody nobody 3361 2001-07-12 18:49 CHANGES
-rw-r--r--  1 nobody nobody    0 2005-05-20 15:52 1
-rw-r--r--  1 nobody nobody    0 2005-06-02 17:40 2
...
# Note that the date is still "2001-07-12",
# 'svn up' did not change that.
nobody:/B/test1/trunk> cd ../..
nobody:/B> svn co svn://server/_test_ test2
...
Checked out revision 7.
nobody:/B> cd test2/trunk/
nobody:/B/test2/trunk> ll -tr
...
-rw-r--r--  1 nobody nobody    0 2005-05-20 15:52 1
-rw-r--r--  1 nobody nobody    0 2005-06-02 17:40 2
-rw-r--r--  1 nobody nobody 3361 2005-07-12 19:14 CHANGES
...
# Now "CHANGES" is not the oldest file, it is the youngest,
# the mtime is "2005-07-12", just by chance this is exactly
# 4 years later :)
---------------------------------------------

If you do commits for each single file at a time, you can keep the
*order* of the original mtimes, that's all.

>>Since Oliver's problem is the nonexistence of a windows build of
>>subversion with the meta-data patches, there might be a solution:
> 
> 
> not only. If Phil Marek's solution remains a patch, it will always
> be some effort to make it work with a new version. I would use the 
> solution only if it is in the main version.

Oh. I thought I had a good idea.

> If not, and if I needed the timestamp, I likely would use another 
> method, e.g. hook scripts saving the mtime in a file property. But 
> at the moment, I can live with my "adjust commit time" hack.

If I may ask: how do you do that? Maybe it is your hack that would
solve my problem of (not wanting to, but anyway) losing the original
mtime.

"use-commit-times = yes" on its own just turned out to be less
useful than I thought.

Best regards
   Dirk

---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Status of meta-data-versioning (mod time)?

Posted by Oliver Betz <li...@gmx.net>.
Dirk Schenkewitz wrote:

[...]

> - This information is stored and presented by every unix/linux
>    filesystem. Not storing it renders this part of the filesystem
>    useless. Why did the developers of the filesystems take the effort

although I wouldn't express it this way ("useless"), it's IMO indeed 
one of the most important metadata properties of a file.

Reading the announcement at http://subversion.tigris.org/ 
"Directories, renames, and file meta-data are versioned" I expected 
that at least the modification time of a file is saved, not only the 
"execute" flag (which isn't even supported by each platform).

Not doing so should be stated at least in the FAQ.

>    to implement it? Because there is some use to it, whatever it may
>    be, perhaps something I never thought of.
> - People, including me, want to know "when was the last change to
>    that file?", even if the file was laying around for some time
>    (months/years!) before being put under subversion control. This

ack. Especially if there is some relation from "pre svn" times to 
other "non svn" versions of the file. Or other situations where files 
are related to external sources.

>    problem cannot be solved using --use-commit-times.

It can if you make one commit per file on initial import. Slow and 
ugly, but it works tolerably.

> In general, when putting something under a VCS, I want to lose
> *as little information as possible*.

I agree.

> Since Oliver's problem is the nonexistence of a windows build of
> subversion with the meta-data patches, there might be a solution:

not only. If Phil Marek's solution remains a patch, it will always be 
some effort to make it work with a new version. I would use the 
solution only if it is in the main version.

If not, and if I needed the timestamp, I likely would use another 
method, e.g. hook scripts saving the mtime in a file property. But at 
the moment, I can live with my "adjust commit time" hack.

Oliver
-- 
Oliver Betz, Muenchen


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Status of meta-data-versioning (mod time)?

Posted by Scott Palmer <sc...@2connected.org>.
On 12-Jul-05, at 11:48 AM, Ben Collins-Sussman wrote:

>
> On Jul 12, 2005, at 7:38 AM, Oliver Betz wrote:
>
>>
>> BTW: was there a good reason to make "use-commit-times" a global
>> setting?
>
> No, that was a mistake.  It should have been a commandline option.

Why not both?  It might make sense to generalize the concept.  Let  
any boolean command line option be specified in the global config so  
the user can control the defaults.

Scott


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Status of meta-data-versioning (mod time)?

Posted by Ben Collins-Sussman <su...@collab.net>.
On Jul 12, 2005, at 7:38 AM, Oliver Betz wrote:
>
> BTW: was there a good reason to make "use-commit-times" a global
> setting?
>

No, that was a mistake.  It should have been a commandline option.


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Status of meta-data-versioning (mod time)?

Posted by Oliver Betz <li...@gmx.net>.
Branko Čibej wrote:

['make' and update to older revision with 'use-commit-times']

> If you preserve commit times in the WC, then after the update to an
> older revision, some files that need rebuilding will have changed, but
> make won't notice because their timestamps would be older than the
> timestamps of the generated files left over from the previous build.
> Such an update would force you to do a clean rebuild, and that's not
> nice.

In many cases it doesn't happen so often to "update" to an earlier 
version, and in many cases it doesn't take so much time to rebuild 
all (faster than a Doxygen run), so for many people that's no problem 
at all.

After all, one can choose whether to use commit times (or original 
modification time) or the actual time.

As far as I could read from the older threads, that's the main 
(only?) argument against "use-commit-times" (or keeping original 
timestamps), am I missing something?

BTW: was there a good reason to make "use-commit-times" a global 
setting?

Oliver
-- 
Oliver Betz, Muenchen


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org


Re: Status of meta-data-versioning (mod time)?

Posted by Saulius Grazulis <gr...@akl.lt>.
On Tuesday 12 July 2005 14:49, Branko Čibej wrote:

> > - 'make' uses them. If the right order is preserved,
> ..
> I've heard this argument many times, and it's simply *wrong*. Imagine
> this scenario:
>
>     * update to HEAD
>     * make clean all
>     * update to some older revision
>     * make all

Well, I guess your scenario contains a mistake:

    * update to some older revision
    * make clean all

will function as intended, provided your makefiles are correct. Given that 
downgrading a WC is not done frequently, 'make clean all' is not a big deal.

Make is not a panacea, but in this case one can use make to get reliable 
results.

> If you preserve commit times in the WC, then after the update to an
> older revision, some files that need rebuilding will have changed, but
> make won't notice because their timestamps would be older than the
> timestamps of the generated files left over from the previous build.
> Such an update would force you to do a clean rebuild, and that's not nice.
>
> The argument assumes that generated files are under version control,
> too, but that's usually not the case.

This depends on the working style and on the working setup. I do admit that in 
many _software development_ projects only sources are kept in the repository, 
updates are frequent, and checkout times are just the right thing.

What makes a difference are projects that need to version generated files, or 
that need to preserve true modification times for some reason.

-- 
Saulius Gražulis

Visuomeninė organizacija "Atviras Kodas Lietuvai"
P.Vileišio g. 18
LT-10306 Vilnius
Lietuva (Lithuania)

tel/fax:      (+370-5)-210 40 05
mobilus:      (+370-684)-49802, (+370-614)-36366

Re: Status of meta-data-versioning (mod time)?

Posted by Branko Čibej <br...@xbc.nu>.
Dirk Schenkewitz wrote:

> - 'make' uses them. If the right order is preserved, this could also
>   be solved using --use-commit-times, but right now I believe that
>   the right order is not preserved.

I've heard this argument many times, and it's simply *wrong*. Imagine 
this scenario:

    * update to HEAD
    * make clean all
    * update to some older revision
    * make all

If you preserve commit times in the WC, then after the update to an 
older revision, some files that need rebuilding will have changed, but 
make won't notice because their timestamps would be older than the 
timestamps of the generated files left over from the previous build. 
Such an update would force you to do a clean rebuild, and that's not nice.

The argument assumes that generated files are under version control, 
too, but that's usually not the case.
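The failure mode can be shown with a minimal sketch (hypothetical filenames; this only models make's newer-than rule, not make itself):

```python
import os
import tempfile
import time

def needs_rebuild(source, target):
    # make's staleness rule: rebuild when the source is newer than the target
    return os.path.getmtime(source) > os.path.getmtime(target)

d = tempfile.mkdtemp()
src = os.path.join(d, "main.c")
obj = os.path.join(d, "main.o")

with open(src, "w") as f:
    f.write("int main(void) { return 1; }\n")   # HEAD version of the source
with open(obj, "w") as f:
    f.write("object code built from HEAD\n")    # just built, so mtime = now

# Now "update to some older revision" with commit times preserved:
# the source's content changes, but its mtime becomes the old commit time.
with open(src, "w") as f:
    f.write("int main(void) { return 0; }\n")
old = time.time() - 3600
os.utime(src, (old, old))

print(needs_rebuild(src, obj))   # False: make would wrongly skip the rebuild
```

The leftover main.o from the HEAD build looks newer than the downgraded main.c, so only a clean rebuild is safe.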

-- Brane


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: Status of meta-data-versioning (mod time)?

Posted by Dirk Schenkewitz <sc...@docomolab-euro.com>.
kfogel@collab.net wrote:
> Scott Palmer <sc...@2connected.org> writes:
> ...
>>In any case, I thought I had read on the list of a patch that has
>>already been made available.  So the cost appears relatively low.  If
>>the development team considers this a low priority (and I agree that
>>it is), it appears that the cost is in proper proportion to the
>>benefits.
> 
> 
> That's how I'd sum things up too.  Note that the real cost is the cost
> of evaluating that patch and the design behind it, which (so far) has
> looked pretty high.  The mere existence of the patch doesn't change
> that cost.

Is there any way to reduce these costs, I mean could I do something
about it?

What is meant by "evaluating that patch and the design behind it",
can you describe what is usually done (in a few sentences, it shouldn't
cost you a lot of time) or give a link to that information?

Reasons why I want to have it:
- Tar files store it. I want to be able to give some stuff to someone
   else in the form of a tar file. If I want to give him/her a new
   version, taken from a freshly checked out WC, then all timestamps
   are different from the old version, even though there are changes
   in a few files only (this could be solved using --use-commit-times,
   maybe, at least to some extent).
- 'make' uses them. If the right order is preserved, this could also
   be solved using --use-commit-times, but right now I believe that
   the right order is not preserved.
- This information is stored and presented by every unix/linux
   filesystem. Not storing it renders this part of the filesystem
   useless. Why did the developers of the filesystems take the effort
   to implement it? Because there is some use to it, whatever it may
   be, perhaps something I never thought of.
- People, including me, want to know "when was the last change to
   that file?", even if the file was lying around for some time
   (months/years!) before being put under subversion control. This
   problem cannot be solved using --use-commit-times.

In general, when putting something under a VCS, I want to lose
*as little information as possible*.


Since Oliver's problem is the nonexistence of a windows build of
subversion with the meta-data patches, there might be a solution:

Branko, could we somehow convince you to build a windows build of
the meta-data version, after the problem with zlib got sorted out?
Please? :)

Have fun
   Dirk


Re: Status of meta-data-versioning (mod time)?

Posted by kf...@collab.net.
Scott Palmer <sc...@2connected.org> writes:
> I'm sorry, I chose my words poorly.  I should have said the
> developers seem to attach little value to the fact that other tools
> use time stamps.  Though it is funny that you bring this point up,
> since it was the phrase "I'm not sympathetic" which I reacted to in a
> similar way (I used it myself merely to point that out).  It also
> implies a certain attitude. (An unsympathetic one :-) )  Suggesting
> that the people making the request have not put sufficient thought
> into the request or effort into changing their work flow in the first
> place.  I don't think that is what is happening in this case.

Yeah, "unsympathetic" was a bit abrupt.  Of course you're right, Ben
didn't mean it as a putdown at all.

> In any case, I thought I had read on the list of a patch that has
> already been made available.  So the cost appears relatively low.  If
> the development team considers this a low priority (and I agree that
> it is), it appears that the cost is in proper proportion to the
> benefits.

That's how I'd sum things up too.  Note that the real cost is the cost
of evaluating that patch and the design behind it, which (so far) has
looked pretty high.  The mere existence of the patch doesn't change
that cost.

-K


Re: Status of meta-data-versioning (mod time)?

Posted by Scott Palmer <sc...@2connected.org>.
On 11-Jul-05, at 2:08 PM, kfogel@collab.net wrote:

> There's a certain attitude that comes up too frequently, when people
> propose features and the Subversion committers don't react with
> sufficient enthusiasm :-).  Here that thought was expressed in
>
>   "it seems the developers ignore the fact that other tools use  
> time stamps"
>
> We don't "ignore" much around here.  What we do is make cost/benefit
> analyses. ...[snip]
>
> So I'm not complaining about the substantive content of your mail.
> However, I and the other developers do not appreciate inaccurate
> accusations that we are "ignoring" arguments when we are instead
> acknowledging those arguments, engaging them critically, and weighing
> them against counterarguments.

I'm sorry, I chose my words poorly.  I should have said the
developers seem to attach little value to the fact that other tools  
use time stamps.  Though it is funny that you bring this point up,  
since it was the phrase "I'm not sympathetic" which I reacted to in a  
similar way (I used it myself merely to point that out).  It also  
implies a certain attitude. (An unsympathetic one :-) )  Suggesting  
that the people making the request have not put sufficient thought  
into the request or effort into changing their work flow in the first  
place.  I don't think that is what is happening in this case.

But enough of that.  I don't think Ben intended to put down the  
users any more than I intended to put down the developers.  If I  
didn't think Subversion was an excellent tool as it is, I wouldn't be  
using it.  Our R&D dept. switched over to Subversion at the beginning  
of this year based on my pushing, and just a few weeks ago my boss  
mentioned, "I'm so glad we switched to Subversion."

In any case, I thought I had read on the list of a patch that has  
already been made available.  So the cost appears relatively low.  If  
the development team considers this a low priority (and I agree that  
it is), it appears that the cost is in proper proportion to the  
benefits.

There was also a similar discussion that started off about Mac  
resource forks, and became a general idea to version "extended  
attributes".  It seemed to have promise, but I haven't heard anything  
about it for a long time.

Scott



Re: Status of meta-data-versioning (mod time)?

Posted by kf...@collab.net.
Scott Palmer <sc...@2connected.org> writes:
> If Subversion is going to throw away metadata that most other file
> operations (most copy, ftp, http file transfers for example)
> preserve, e.g. mod times,  then the onus is on it to justify that.
> So far the answer I always hear is, "always use subversion to track
> file versions"... to which I'm not sympathetic.  :-)
> 
> I have no desire or ability to force everyone and everything I
> exchange data with to use Subversion exclusively.  Mod times would
> work fine for tracking file versions, if tools like Subversion didn't
> break them (and insist they were useless in the first place because
> some other tool might break them as well).
> 
> There are plenty of existing tools and/or scripts out there that do
> their work based on time stamps.   Subversion itself argues that
> 'make' only works properly if the time stamps are just so.  It is the
> reason often quoted for NOT tracking the file mod times... and yet it
> seems the developers ignore the fact that other tools use time stamps
> in similar ways and work best when the mod time is accurately tracked.
> 
> Why should everyone change their workflow to accommodate the fact
> that subversion doesn't save meta-data?  That "whole bunch of
> scripts" may be difficult or impossible to change.  Filesystems keep
> track of mod times for a reason.  Why does subversion assume that
> reason goes away completely with a VCS in place?  The OS' filesystem
> is still there!  You don't assume 'make' is integrated with
> Subversion, why assume everything else must be?

There's a certain attitude that comes up too frequently, when people
propose features and the Subversion committers don't react with
sufficient enthusiasm :-).  Here that thought was expressed in

  "it seems the developers ignore the fact that other tools use time stamps"

We don't "ignore" much around here.  What we do is make cost/benefit
analyses.  We try to determine the cost of a feature (including
implementation, maintenance of the code forever after, documentation,
and fielding future user questions about edge cases) and compare that
to the benefits (a new ability in Subversion, no longer having to
field user questions about the feature's lack, etc).

Sometimes we decide the benefit isn't worth the cost.  Such
determinations are always subject to future re-evaluation, of course.
Maybe someone will point out a benefit no one had thought of before,
or propose a clever implementation that reduces the cost.  Conclusions
are always tentative, in the sense that they're not meant to close off
further discussion.

So I'm not complaining about the substantive content of your mail.
However, I and the other developers do not appreciate inaccurate
accusations that we are "ignoring" arguments when we are instead
acknowledging those arguments, engaging them critically, and weighing
them against counterarguments.  I believe several developers have done
so in this thread (or in previous threads on the same topic, if not in
this exact thread).  Just because they didn't come to the conclusion
you wanted doesn't mean they ignored the arguments in favor of that
conclusion! :-)

Thanks,
-Karl


Re: Status of meta-data-versioning (mod time)?

Posted by Scott Palmer <sc...@2connected.org>.
On 11-Jul-05, at 9:21 AM, Ben Collins-Sussman wrote:

> Once the project is in subversion's repository, tell me why you  
> still care about the original timestamps.  I'd like to know.  The  
> answer I always hear is, "I have a whole bunch of scripts that  
> depend on timestamps to perform copy synchronizations!"... to which  
> I'm not sympathetic.

If Subversion is going to throw away metadata that most other file  
operations (most copy, ftp, http file transfers for example)  
preserve, e.g. mod times,  then the onus is on it to justify that.    
So far the answer I always hear is, "always use subversion to track  
file versions"... to which I'm not sympathetic.  :-)

I have no desire or ability to force everyone and everything I  
exchange data with to use Subversion exclusively.  Mod times would  
work fine for tracking file versions, if tools like Subversion didn't  
break them (and insist they were useless in the first place because  
some other tool might break them as well).

There are plenty of existing tools and/or scripts out there that do  
their work based on time stamps.   Subversion itself argues that  
'make' only works properly if the time stamps are just so.  It is the  
reason often quoted for NOT tracking the file mod times... and yet it  
seems the developers ignore the fact that other tools use time stamps  
in similar ways and work best when the mod time is accurately tracked.

Why should everyone change their workflow to accommodate the fact  
that subversion doesn't save meta-data?  That "whole bunch of  
scripts" may be difficult or impossible to change.  Filesystems keep  
track of mod times for a reason.  Why does subversion assume that  
reason goes away completely with a VCS in place?  The OS' filesystem  
is still there!  You don't assume 'make' is integrated with  
Subversion, why assume everything else must be?

Scott


Re: Status of meta-data-versioning (mod time)?

Posted by Saulius Grazulis <gr...@akl.lt>.
On Monday 11 July 2005 16:21, Ben Collins-Sussman wrote:

> Once the project is in subversion's repository, tell me why you still  
> care about the original timestamps.  I'd like to know.  The answer I  
> always hear is, "I have a whole bunch of scripts that depend on  
> timestamps to perform copy synchronizations!"... to which I'm not  
> sympathetic.

There is much more than just a "bunch of scripts" that relies on mtime -- it 
is The Make System ;).

Now, while many software dev. projects only keep sources in the repository, 
there are some software projects, as well as _non-software development_ 
projects, that need to have make-built files versioned as well. There was a 
discussion about this some time ago, so I don't want to repeat it, just to 
point it out.

Now, obviously, if mtimes are messed up then 'make' gets confused, and so are 
the users. ;).

Time-stamp preservation is what helps me 90% of the time. I do use 'mod time' 
versioning from Marek's branch all the time. No go without it.

Still, there is a possibility of a race condition leading to incorrect make 
builds when using genuine time-stamps. (I believe I once posted an example.)

An even better solution for this purpose would probably be to ensure that the 
commit times of the files preserve the order of the mtimes (i.e. commit the 
oldest files first and the newest files last, and stamp them accordingly). 
This would eliminate the race condition.
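The ordering idea could be sketched like this (a hypothetical helper, not an existing svn feature; it only computes the order in which files would be committed):

```python
import os
import tempfile
import time

def commit_order(paths):
    # Oldest mtime first: committing in this order keeps commit
    # timestamps in the same relative order as the modification times.
    return sorted(paths, key=os.path.getmtime)

# Two files whose mtimes are deliberately out of order:
d = tempfile.mkdtemp()
old = os.path.join(d, "old.c")
new = os.path.join(d, "new.c")
for p in (old, new):
    with open(p, "w") as f:
        f.write("\n")
now = time.time()
os.utime(old, (now - 100, now - 100))
os.utime(new, (now, now))

print([os.path.basename(p) for p in commit_order([new, old])])
# ['old.c', 'new.c']
```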

However, mtimes are more logical and also serve other people's needs.

-- 
Saulius Gražulis

Visuomeninė organizacija "Atviras Kodas Lietuvai"
P.Vileišio g. 18
LT-10306 Vilnius
Lietuva (Lithuania)

tel/fax:      (+370-5)-210 40 05
mobilus:      (+370-684)-49802, (+370-614)-36366

Re: Status of meta-data-versioning (mod time)?

Posted by Oliver Betz <li...@gmx.net>.
Ben Collins-Sussman wrote:

> > In their world, every project seems to be started under version
> > control. Well, that's not the real world.
> 
> Not true at all.  Rather, we expect that 90% of the time, projects 
> will be 'svn import'ed into subversion.

In which state, and at what age? My "embedded" industrial applications are 
rather long-lived; stuff from 1993 (long before I thought about using 
version control) is still maintained.

IMHO the timestamp is a rather meaningful property if there is no 
equivalent information in the revision log.

Well, with a small script I can (and will) add/commit each file one at a 
time and tweak the commit time; this way the project history keeps much 
the same information as if it had been under version control from the 
beginning. But it's a rather ugly (and slooow) hack.

> Once the project is in subversion's repository, tell me why you still 
> care about the original timestamps.  I'd like to know.  The answer I 

People have already given many reasons in past threads, for example 
handling files that are not source text, or files from external sources.


Another reason: as long as there is any connection to the "pre-svn" 
status of the project, the timestamp is an important indicator, or at 
least helpful for finding the connection.

For example, there might be more than one version (branch) from "pre-
svn" ages, those files can be identified/compared by their timestamp.


After a file is modified for the first time under version control, the 
original timestamp is usually (!) no longer important, since the commit 
time is as good as the mtime in most (not necessarily all) cases.


I also agree with others that svn should be able to "touch" the 
affected local files on commit so that the working copy has the same 
timestamp as the committed revision. This would cause "make" to 
compile the files again (as would happen when using substitutions), but 
one would at least have identical working copies (if using 
use-commit-times).

> always hear is, "I have a whole bunch of scripts that depend on 
> timestamps to perform copy synchronizations!"... to which I'm not 
> sympathetic.

Situations where this is really important may be rare, but they exist 
(files from external sources).

Oliver
-- 
Oliver Betz, Muenchen



Re: Status of meta-data-versioning (mod time)?

Posted by Ben Collins-Sussman <su...@collab.net>.
On Jul 11, 2005, at 5:47 AM, Oliver Betz wrote:
>
> In their world, every project seems to be started under version
> control. Well, that's not the real world.

Not true at all.  Rather, we expect that 90% of the time, projects  
will be 'svn import'ed into subversion.

Once the project is in subversion's repository, tell me why you still  
care about the original timestamps.  I'd like to know.  The answer I  
always hear is, "I have a whole bunch of scripts that depend on  
timestamps to perform copy synchronizations!"... to which I'm not  
sympathetic.




Re: spamassassin setup

Posted by Logan Shaw <ls...@emitinc.com>.
On Thu, 14 Sep 2006, Dhaval Patel wrote:
>> SpamAssassin comes with a whole bunch of rules by default.
>> The best thing is to look at those rules and see what they're
>> doing.  There's probably real documentation somewhere, but
>> there is so much example code that you may not need it.
>
> I did not see much in the local.cf after a fresh installation. I went to
> http://www.yrex.com/spam/spamconfig.php to generate my config file.

SpamAssassin installs a whole bunch of rules files that it
references.  It may depend on the system, but on my machine,
they're in /usr/share/spamassassin.

>>> So to see if an ip or hostname is in the RBL it would make a request to the RBL servers
>>> on port 53 just like DNS queries?
>>
>> It's not just like regular DNS queries.  It *is* a regular DNS
>> query.  It doesn't go against any extra, third-party servers.
>> I believe SpamAssassin uses its own resolver code, but it
>> looks at /etc/resolv.conf just like anything else and uses
>> the nameserver (nameservers?) it finds in there.
>
> Thanks for clearing that up. But one more question about this. If it uses my DNS servers,
> how does it query the RBL servers and give them the hostname or ip?

It encodes them as paths within the DNS namespace.  This is
sort of a hack, but it works.

For example, suppose you decide to set up your own DNSBL,
and you decide you hate Google and Apple and you want to put
them on your blacklist.  Maybe you own the domain dhaval.org
and you decide to call your DNSBL the "dpbl" (for Dhaval Patel
Black-List).  To create this blacklist, you would simply create
two DNS entries[1]:

 	google.com.dpbl.dhaval.org. IN A 127.0.0.1
 	apple.com.dpbl.dhaval.org.  IN A 127.0.0.1

Now, let's say I want to check whether google.com is on
your blacklist.  I take the string "google.com" and I append
".dpbl.dhaval.org" onto it.  Then I do a regular DNS lookup to
see if "google.com.dpbl.dhaval.org" exists.  If I get a record
back, you've blacklisted them.  If I get back a reply that says
it's a non-existent domain, you haven't.  (The fact that the
address 127.0.0.1 is returned isn't really relevant, usually;
that's just there because a DNS "A" record has to include some
sort of address.)  To do an IP-address-based blacklist instead
of a (textual) domain-based one, you can use a similar mechanism.
If I want to know if 10.20.30.40 is on your blacklist, I just
look up 10.20.30.40.dpbl.dhaval.org.
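The lookup just described can be sketched in Python (the dpbl.dhaval.org zone is the hypothetical one from the example above, and the fake resolver stands in for real DNS so the logic is visible without network traffic; against a live list you would pass socket.gethostbyname instead):

```python
import socket

def is_listed(name, zone, resolve=socket.gethostbyname):
    query = "%s.%s" % (name, zone)       # e.g. google.com.dpbl.dhaval.org
    try:
        resolve(query)                   # any A record back means "listed"
        return True
    except socket.gaierror:              # NXDOMAIN: not on the blacklist
        return False

# A fake resolver standing in for the two A records from the example:
ZONE_DATA = {"google.com.dpbl.dhaval.org": "127.0.0.1",
             "apple.com.dpbl.dhaval.org": "127.0.0.1"}

def fake_resolve(query):
    try:
        return ZONE_DATA[query]
    except KeyError:
        raise socket.gaierror("NXDOMAIN")

print(is_listed("google.com", "dpbl.dhaval.org", fake_resolve))   # True
print(is_listed("example.org", "dpbl.dhaval.org", fake_resolve))  # False
```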

From your Linux machine's point of view and from the point
of view of the caching nameserver at your ISP, there isn't
any difference at all between this and a regular DNS lookup.
The DNS server that does the lookup has to chase IN NS records
from the root servers all the way down the hierarchy to know
which servers to consult, but it always has to do that, even
if you are looking up www.cs.berkeley.edu or something.

   - Logan

[1]  Well, actually more than two.  You'd need the supporting
      entries like SOA and NS entries for each level of the
      DNS namespace.  But we only care about two of them.

Re: Secure SSL connections between Solr and ZooKeeper

Posted by Jan Høydahl <ja...@cominvent.com>.
This has been working for a few years already, but there is a lack of documentation; see https://issues.apache.org/jira/browse/SOLR-7889 and its children. We would be very happy to receive contributions to the documentation, in particular https://issues.apache.org/jira/browse/SOLR-7893 !

Jan


> 25. mar. 2022 kl. 04:43 skrev Sam Lee <sa...@yahoo.com.INVALID>:
> 
> I think I've found the way to connect SolrCloud to an external ZooKeeper
> ensemble via SSL.
> 
> By default, Solr does not use SSL to connect to ZooKeeper. So if the
> ZooKeeper configuration requires SSL for client connections, Solr will
> complain like this when it tries to connect to ZooKeeper:
> 
> --8<---------------cut here---------------start------------->8---
> WARN  - 2022-03-25 12:34:43.681; org.apache.zookeeper.ClientCnxn; Session 0x0 for sever localhost/127.0.0.1:2182, Closing socket connection. Attempting reconnect except it is a SessionExpiredException. => EndOfStreamException: Unable to read additional data from server sessionid 0x0, likely server has closed socket
> 	at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77)
> org.apache.zookeeper.ClientCnxn$EndOfStreamException: Unable to read additional data from server sessionid 0x0, likely server has closed socket
> 	at org.apache.zookeeper.ClientCnxnSocketNIO.doIO(ClientCnxnSocketNIO.java:77) ~[zookeeper-3.6.2.jar:3.6.2]
> 	at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:350) ~[zookeeper-3.6.2.jar:3.6.2]
> 	at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1275) ~[zookeeper-3.6.2.jar:3.6.2]
> --8<---------------cut here---------------end--------------->8---
> 
> On the ZooKeeper side, the corresponding log entry is something like
> this:
> 
> --8<---------------cut here---------------start------------->8---
> 2022-03-25 12:34:43,652 [myid:1] - ERROR [nioEventLoopGroup-4-2:NettyServerCnxnFactory$CertificateVerifier@448] - Unsuccessful handshake with session 0x0
> 2022-03-25 12:34:43,682 [myid:1] - WARN  [nioEventLoopGroup-4-2:NettyServerCnxnFactory$CnxnChannelHandler@284] - Exception caught
> io.netty.handler.codec.DecoderException: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 0000002d000000000000000000000000000075300000000000000000000000100000000000000000000000000000000000
> 	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:478)
> 	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276)
> 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> 	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
> 	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
> 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
> 	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
> 	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
> 	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
> 	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:719)
> 	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:655)
> 	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:581)
> 	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493)
> 	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
> 	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
> 	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
> 	at java.base/java.lang.Thread.run(Thread.java:829)
> Caused by: io.netty.handler.ssl.NotSslRecordException: not an SSL/TLS record: 0000002d000000000000000000000000000075300000000000000000000000100000000000000000000000000000000000
> 	at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1232)
> 	at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1300)
> 	at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:508)
> 	at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:447)
> 	... 17 more
> --8<---------------cut here---------------end--------------->8---
> 
> This error message indicates that ZooKeeper was expecting an SSL
> connection, but the client (i.e. Solr) was connecting without SSL.
> 
> The solution is to add the appropriate ZooKeeper Java properties. Notice
> that these are exactly the same properties needed by standalone
> ZooKeeper's 'zkServer.sh' and 'zkCli.sh' to connect to ZooKeeper via
> SSL [1] [2]. Add the following to bin/solr.in.sh:
> 
> --8<---------------cut here---------------start------------->8---
> SOLR_OPTS="$SOLR_OPTS
>    -Dzookeeper.client.secure=true
>    -Dzookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
>    -Dzookeeper.ssl.keyStore.location=/path/to/zk-keystore.jks
>    -Dzookeeper.ssl.keyStore.password=thepassword
>    -Dzookeeper.ssl.trustStore.location=/path/to/zk-truststore.jks
>    -Dzookeeper.ssl.trustStore.password=thepassword"
> --8<---------------cut here---------------end--------------->8---
> 
> 
>  [1]: https://stackoverflow.com/questions/43930797/configuring-ssl-in-zookeeper
>  [2]: https://cwiki.apache.org/confluence/display/zookeeper/zookeeper+ssl+user+guide
>    (Note that this ^ webpage says, "There is currently no support for
>    SSL for the communication between ZooKeeper servers". That statement
>    is no longer correct. "Quorum TLS" is available from ZooKeeper 3.5.5
>    onwards).


Re: Unused classes

Posted by ms...@apache.org.
I have no trouble with those classes being deleted.

-Mike

On 22 Apr 2003 at 22:19, Jeremy Arnold wrote:

> Hello,
>     A couple more classes that appear to be unused.  Can anybody verify 
> that they definitely aren't used? (JMeter has a habit of magically 
> finding classes that aren't referenced anywhere.)
> 
>     If they aren't used, should they be deleted, or should they be 
> updated so they are used?
> 
> org.apache.jmeter.visualizers.BarVisualizer
> org.apache.jmeter.visualizers.TreeVisualizer  // The "View Results Tree" 
> visualizer is actually ViewResultsFullVisualizer
> org.apache.jmeter.visualizers.WindowedVisualizer
> 
> Jeremy
> 
> 
> 
> 

Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688


---------------------------------------------------------------------
To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org


Unused classes

Posted by Jeremy Arnold <je...@bigfoot.com>.
Hello,
    A couple more classes that appear to be unused.  Can anybody verify 
that they definitely aren't used? (JMeter has a habit of magically 
finding classes that aren't referenced anywhere.)

    If they aren't used, should they be deleted, or should they be 
updated so they are used?

org.apache.jmeter.visualizers.BarVisualizer
org.apache.jmeter.visualizers.TreeVisualizer  // The "View Results Tree" 
visualizer is actually ViewResultsFullVisualizer
org.apache.jmeter.visualizers.WindowedVisualizer

Jeremy






Re: request appears to be processed twice with PDF::Create

Posted by Dermot Paikkos <de...@sciencephoto.com>.
Hi.

On 4 Aug 2004 at 14:20, Tom Schindl wrote:

> Oh, I see. Not a very heavily loaded service :-). Are you running the
> server on win32?

No slackware linux with Apache/1.3.26 (Unix) mod_perl/1.27

> > That besides, the above method didn't work (see mail to list 
> > "Accessing form multiples". I think you replied to it. Whenever I
> > tried to loop through the incoming list of
> > "user=dermot+paikkos&user=joe+blogs" I was only getting the first
> > user.
> 
> Although that's Perl, here's the explanation:
> 
> $r->param( "key" ) can be used in two contexts
> (see perldoc -f wantarray). That's something very specific to
> Perl, which you may not be familiar with if you have used languages
> like Java, C++, ...:
Ha, yes... well, no, perl is the only language I have tried. I am a 
sysadmin, not a developer. This is more like a hobby.

Yes, I have been stung by this in the past, but with a regex. I think I 
was told I was calling it in the wrong context and that I should use a 
slice.

> 1. List-Context    => returns list of values:
> ---------------------------------------------
> Example for list contexts:
> @vals = $r->param("keys")
> foreach( $r->param("keys") )
> @hash_slice{$r->param("keys")}
> 
> 2. Scalar-Context => returns first value of list:
> -------------------------------------------------
> $val = $r->param("keys")
> $hash{$r->param("keys")}
> 
> 
> > Still not sure why I can do 
> > 
> > my $r=Apache::Request->new(shift);
> > my @users;
> > foreach my $param ($r->param) {
> > 	push(@users,$r->param($param));
> > } 
> 
> here you are in list context.
> 
> @user{$apr->param("keys")} = ();
> map { $user{$_} = 0 } keys %user;

This looks tasty. I could learn something here. I think that is a 
slice... not too familiar with map... better go back to the books. 
This is what I was told to consider, but to be honest I am not familiar 
with hash slices or maps, so I opted to stick the lot in an array 
as per your 2nd example.

> > and not:
> > 
> > my $r=Apache::Request->new(shift);
> > my %users;
> > foreach my $param ($r->parm) {
> > 	%users{$r->param($param) = 0;
> > }
> > 
> 
> That's not valid syntax at all. If you meant $user{$r->param($param)} =
> 0, then you are in scalar context here.

Oops, I meant $users{$r->param($param)}. Not that it would have 
worked.

Sorry to labour on this. Better get off the list now before people grumble.
Thanks again, Tom.
Dp.



-- 
Report problems: http://perl.apache.org/bugs/
Mail list info: http://perl.apache.org/maillist/modperl.html
List etiquette: http://perl.apache.org/maillist/email-etiquette.html


Re: request appears to be processed twice with PDF::Create

Posted by Tom Schindl <to...@gmx.at>.
Dermot Paikkos wrote:
> Hi,
> 

[...]

> 
> These reports are actually users' clock-in/clock-out times for the 
> week. So the reports are generated weekly, printed, and rarely 
> used again. The issue was that if the HR person uses IE5/6 the 

Oh, I see: not a very heavily loaded service :-). Are you running the 
server on win32?

> That aside, the above method didn't work (see my mail to the list,
> "Accessing form multiples"; I think you replied to it). Whenever I 
> tried to loop through the incoming list of 
> "user=dermot+paikkos&user=joe+blogs" I was only getting the first 
> user.

Although that's Perl-specific, here's the explanation.

$r->param( "key" ) can be used in two contexts
(see perldoc -f wantarray). That's something very specific to
Perl which you may not be familiar with if you have used languages
like Java, C++, ...:

1. List-Context    => returns list of values:
---------------------------------------------
Example for list contexts:
@vals = $r->param("keys")
foreach( $r->param("keys") )
@hash_slice{$r->param("keys")}

2. Scalar-Context => returns first value of list:
-------------------------------------------------
$val = $r->param("keys")
$hash{$r->param("keys")}
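
The two contexts can be tried outside mod_perl with a small self-contained sketch. The param() sub below is only a stand-in for $r->param() (an assumption for illustration); like Apache::Request, it uses wantarray to return all values in list context and just the first in scalar context:

```perl
use strict;
use warnings;

# Stand-in for $r->param($key): all values in list context,
# first value only in scalar context (per perldoc -f wantarray).
my %params = ( user => [ 'dermot paikkos', 'joe blogs' ] );

sub param {
    my ($key) = @_;
    my @vals = @{ $params{$key} || [] };
    return wantarray ? @vals : $vals[0];
}

my @all   = param('user');   # list context: both values
my $first = param('user');   # scalar context: first value only

print scalar(@all), " values; first is '$first'\n";
```

Assigning to an array forces list context; assigning to a scalar forces scalar context, which is exactly why the two loops discussed below behave differently.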


> Still not sure why I can do 
> 
> my $r=Apache::Request->new(shift);
> my @users;
> foreach my $param ($r->param) {
> 	push(@users,$r->param($param));
> } 

here you are in list context.

@user{$apr->param("keys")} = ();
map { $user{$_} = 0 } keys %user;
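
The slice-and-map idiom above can also be tried standalone; here a plain list stands in for $r->param("user") (an assumption for illustration):

```perl
use strict;
use warnings;

# A plain list standing in for the submitted "user" values.
my @names = ( 'dermot paikkos', 'joe blogs' );

# Hash slice: creates one key per name (values start out undef)...
my %user;
@user{@names} = ();

# ...then set every value to 0, as in the map example.
$user{$_} = 0 for keys %user;

print join( ', ', sort keys %user ), "\n";
```

The slice populates the keys in one statement; the loop (or map) then initialises the values.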

> 
> and not:
> 
> my $r=Apache::Request->new(shift);
> my %users;
> foreach my $param ($r->parm) {
> 	%users{$r->param($param) = 0;
> }
> 

That's not valid syntax at all if you meant $user{$r->param($param)} = 0 
then you are here in scalar context.


> > I am sure your code would work fine, but what I have managed to do
> > seems to work in both IE and Firefox and is code I can manage. If I
> > got stuck on yours I would be out of my depth. So a sincere thanks, but I
> > am going to stick with what I have for now.
> Thanx.
> Dp.
> 
> 

Reclaim Your Inbox!
http://www.mozilla.org/products/thunderbird



Re: request appears to be processed twice with PDF::Create

Posted by Dermot Paikkos <de...@sciencephoto.com>.
Hi,

Perhaps you're right, but let me explain what the problem is again, 
because you are obviously far better at this than I am. I am not even a 
good Perl scripter, so your code makes little sense to me, and I am not 
sure I see the problem you are trying to fix.

These reports are actually users' clock-in/clock-out times for the 
week. So the reports are generated weekly, printed, and rarely 
used again. The issue was that if the HR person uses IE5/6, the 
request is sent twice, so you end up opening the user's file twice and 
looping through the data twice. The only consequence of this is that the 
lines on the PDF are almost in bold, as each line is sent twice. What 
I was trying to do was create a hash with the user's name as a key 
($users{"Dermot Paikkos"} = 0) and then increment the value once the 
file was read, so I didn't open the file a second time. 

I am not sure if the child process would really come into it but I 
bow to your expertise on that.

One of the odd things about this was that although the request could 
be read as coming in twice, if the HR person selected 4 users, you 
still get 4 pages. So no biggy there.

That aside, the above method didn't work (see my mail to the list, 
"Accessing form multiples"; I think you replied to it). Whenever I 
tried to loop through the incoming list of 
"user=dermot+paikkos&user=joe+blogs" I was only getting the first 
user.
Still not sure why I can do 

my $r=Apache::Request->new(shift);
my @users;
foreach my $param ($r->param) {
	push(@users,$r->param($param));
} 

and not:

my $r=Apache::Request->new(shift);
my %users;
foreach my $param ($r->parm) {
	%users{$r->param($param) = 0;
}

I am sure your code would work fine, but what I have managed to do 
seems to work in both IE and Firefox and is code I can manage. If I 
got stuck on yours I would be out of my depth. So a sincere thanks, but I 
am going to stick with what I have for now.
Thanx.
Dp.


On 4 Aug 2004 at 9:53, Tom Schindl wrote:

> Hi,
> 
> what you really want is intelligent caching behaviour:
> 
> 1. Generate the PDF if it does not exist in the filesystem (the
>     advantage is that multiple processes have access to this PDF), or
>     use BerkeleyDB, for example.
> 2a. Send the generated PDF from the cache.
> 2b. Regenerate the PDF if the underlying data has been modified, and
>      store it again for later.
> [3. A cronjob which cleans up your disk-cache from time to time.]
> 
> The code looks like the following (Apache 2):
> 
> -------------------------8<-------------------------
> use Apache::RequestIO ();
> use Apache::RequestRec ();
> use Apache::Constants ();
> use Digest::MD5 qw(md5_hex);
> 
> sub handler {
>    my $r = shift;
>    ## ....
>    my $cache_key = md5_hex( $user . $path );
>    my $file = "/tmp/pdf-cache/$cache_key.pdf";
> 
>    ## serve the cached copy if it is newer than the source data
>    if( -e $file && (stat($file))[9] > (stat($path))[9] ) {
>      eval {
>         $r->sendfile($file);
>      };
> 
>      if( ! $@ ) {
>         return Apache::OK;
>      }
>    }
> 
>    ## create your PDF like you've done before
>    ## ....
> 
>    open( CACHEFILE, ">", $file ) or die "can't write $file: $!";
>    print CACHEFILE $content;
>    close( CACHEFILE );
> 
>    return Apache::OK;
> }
> -------------------------8<-------------------------
> 
> 
> Dermot Paikkos wrote:
> > Hi Tom,
> > 
> > I think I see what your saying. What if you ignored the first and
> > worked on the 2nd? Same problem I guess if you hit the first child.
> > I haven't tried to determine if there are two processes as a result
> > of hitting the form but I know from the logs that the module starts
> > cycling through the form data again.
> > 
> > It is probably very sloppy but my problem was that given a username
> > and their file, the file was being opened twice which made parsing a
> > bit tricky. All I want is to open the file once. I am open to more
> > elegant suggestions though :-) Dp.
> > 
> > 
> > On 30 Jul 2004 at 2:41, Tom Schindl wrote:
> > 
> > 
> >>Hi,
> >>
> >>Are you sure that's working as you expected? If I got you right I'm
> >>afraid I have to disappoint you but this only works if the second
> >>request is handled by the same apache-child as the first one and
> >>there's another problem. If you come back one day later and  hit the
> >>same apache-child which already processed the first request, you
> >>won't get anything.
> >>
> >>Tom
> >>
> >>Dermot Paikkos wrote:
> >>
> >>
> >>>Arnaud,
> >>>
> >>>I have found a way around this. I don't know if your interested but
> >>>it goes likes something like this:
> >>>
> >>>   foreach my $param ($r->param) {
> >>>                 if ($param =~ /\busers\b/) {
> >>>                     $users{$r->param($param)} = 0;
> >>>	}
> >>>....snip...then later
> >>>
> >>> foreach my $key (keys %users) {
> >>>		next if ($users{$key} == 1;
> >>>		$users{$key} = 1;
> >>> }
> >>>
> >>>The idea being you only work request that haven't been processed
> >>>yet. Once you process a request you increment that hash key to 1
> >>>and can avoid using it again. IE still sends the request twice and
> >>>it is working with the first request not the second. 
> >>>
> >>>Just a thought.
> >>>Dp.
> >>>
> >>>
> >>>
> >>>On 29 Jul 2004 at 16:20, Arnaud Blancher wrote:
> >>>
> >>>
> >>> 
> >>>
> >>>
> >>>>Dermot Paikkos a écrit :
> >>>>
> >>>>   
> >>>>
> >>>>
> >>>>>Does this mean you have to go an clean up these files later
> >>>>>
> >>>>>     
> >>>>>
> >>>>
> >>>>yes, if you dont want they stay on the disk.
> >>>>
> >>>>   
> >>>>
> >>>>
> >>>>>or is 
> >>>>>this done when the process ends?
> >>>>>
> >>>>>     
> >>>>>
> >>>>
> >>>>maybe you can write a special handle for the directory where you
> >>>>ll write your pdf that delete the pdf when the connection (due to
> >>>>the redirect) will be close by the client (but i'not sure).
> >>>>
> >>>>   
> >>>>
> >>>>
> >>>>>I don't want to slow the users down 
> >>>>>unless I have to. 
> >>>>>
> >>>>>I think I would like to determine the user-agent and work around
> >>>>>the repeating requests....somehow. Do you know how to find out
> >>>>>the user- agent when using Apache::Request?  I can't see it when
> >>>>>I use this object. Thanx. Dp.
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>
> >>>>>     
> >>>>>
> >>>>
> >>>>   
> >>>>
> >>>
> >>>
> >>>~~
> >>>Dermot Paikkos * dermot@sciencephoto.com
> >>>Network Administrator @ Science Photo Library
> >>>Phone: 0207 432 1100 * Fax: 0207 286 8668
> >>>
> >>>
> >>> 
> >>>
> >>
> >>
> >>
> >>
> > 
> > 
> > 
> > ~~
> > Dermot Paikkos * dermot@sciencephoto.com
> > Network Administrator @ Science Photo Library
> > Phone: 0207 432 1100 * Fax: 0207 286 8668
> > 
> > 
> 
> 
> 
> 


~~
Dermot Paikkos * dermot@sciencephoto.com
Network Administrator @ Science Photo Library
Phone: 0207 432 1100 * Fax: 0207 286 8668




Re: request appears to be processed twice with PDF::Create

Posted by Tom Schindl <to...@gmx.at>.
Hi,

what you really want is intelligent caching behaviour:

1. Generate the PDF if it does not exist in the filesystem (the
    advantage is that multiple processes have access to this PDF), or
    use BerkeleyDB, for example.
2a. Send the generated PDF from the cache.
2b. Regenerate the PDF if the underlying data has been modified, and
     store it again for later.
[3. A cronjob which cleans up your disk cache from time to time.]

The code looks like the following (Apache 2):

-------------------------8<-------------------------
use Apache::RequestIO ();
use Apache::RequestRec ();
use Apache::Constants ();
use Digest::MD5 qw(md5_hex);

sub handler {
   my $r = shift;
   ## ....
   my $cache_key = md5_hex( $user . $path );
   my $file = "/tmp/pdf-cache/$cache_key.pdf";

   ## serve the cached copy if it is newer than the source data
   if( -e $file && (stat($file))[9] > (stat($path))[9] ) {
     eval {
        $r->sendfile($file);
     };

     if( ! $@ ) {
        return Apache::OK;
     }
   }

   ## create your PDF like you've done before
   ## ....

   open( CACHEFILE, ">", $file ) or die "can't write $file: $!";
   print CACHEFILE $content;
   close( CACHEFILE );

   return Apache::OK;
}
-------------------------8<-------------------------
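
The freshness test in that handler (serve the cached PDF only if it is newer than the data it was built from) can be isolated into a small sketch; the file paths involved are hypothetical:

```perl
use strict;
use warnings;
use File::stat;    # stat() now returns an object with ->mtime

# Serve from cache only when the cache file exists and its
# modification time is newer than the underlying data file's.
sub cache_is_fresh {
    my ( $cache_file, $data_file ) = @_;
    return 0 unless -e $cache_file && -e $data_file;
    return stat($cache_file)->mtime > stat($data_file)->mtime ? 1 : 0;
}
```

If the data file has been modified since the cached PDF was written, the check fails and the handler falls through to regenerating the PDF.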


Dermot Paikkos wrote:
> Hi Tom,
> 
> I think I see what you're saying. What if you ignored the first and 
> worked on the second? Same problem, I guess, if you hit the first child. I 
> haven't tried to determine if there are two processes as a result of 
> hitting the form, but I know from the logs that the module starts 
> cycling through the form data again.
> 
> It is probably very sloppy but my problem was that given a username 
> and their file, the file was being opened twice which made parsing a 
> bit tricky. All I want is to open the file once. I am open to more 
> elegant suggestions though :-)
> Dp.
> 
> 
> On 30 Jul 2004 at 2:41, Tom Schindl wrote:
> 
> 
>>Hi,
>>
>>Are you sure that's working as you expected? If I got you right I'm
>>afraid I have to disappoint you but this only works if the second
>>request is handled by the same apache-child as the first one and
>>there's another problem. If you come back one day later and  hit the
>>same apache-child which already processed the first request, you won't
>>get anything.
>>
>>Tom
>>
>>Dermot Paikkos wrote:
>>
>>
>>>Arnaud,
>>>
>>>I have found a way around this. I don't know if your interested but
>>>it goes likes something like this:
>>>
>>>   foreach my $param ($r->param) {
>>>                 if ($param =~ /\busers\b/) {
>>>                     $users{$r->param($param)} = 0;
>>>	}
>>>....snip...then later
>>>
>>> foreach my $key (keys %users) {
>>>		next if ($users{$key} == 1;
>>>		$users{$key} = 1;
>>> }
>>>
>>>The idea being you only work request that haven't been processed yet.
>>>Once you process a request you increment that hash key to 1 and can
>>>avoid using it again. IE still sends the request twice and it is
>>>working with the first request not the second. 
>>>
>>>Just a thought.
>>>Dp.
>>>
>>>
>>>
>>>On 29 Jul 2004 at 16:20, Arnaud Blancher wrote:
>>>
>>>
>>> 
>>>
>>>
>>>>Dermot Paikkos a écrit :
>>>>
>>>>   
>>>>
>>>>
>>>>>Does this mean you have to go an clean up these files later
>>>>>
>>>>>     
>>>>>
>>>>
>>>>yes, if you dont want they stay on the disk.
>>>>
>>>>   
>>>>
>>>>
>>>>>or is 
>>>>>this done when the process ends?
>>>>>
>>>>>     
>>>>>
>>>>
>>>>maybe you can write a special handle for the directory where you ll
>>>>write your pdf that delete the pdf when the connection (due to the
>>>>redirect) will be close by the client (but i'not sure).
>>>>
>>>>   
>>>>
>>>>
>>>>>I don't want to slow the users down 
>>>>>unless I have to. 
>>>>>
>>>>>I think I would like to determine the user-agent and work around
>>>>>the repeating requests....somehow. Do you know how to find out the
>>>>>user- agent when using Apache::Request?  I can't see it when I use
>>>>>this object. Thanx. Dp.
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>
>>>>>     
>>>>>
>>>>
>>>>   
>>>>
>>>
>>>
>>>~~
>>>Dermot Paikkos * dermot@sciencephoto.com
>>>Network Administrator @ Science Photo Library
>>>Phone: 0207 432 1100 * Fax: 0207 286 8668
>>>
>>>
>>> 
>>>
>>
>>
>>
>>
> 
> 
> 
> ~~
> Dermot Paikkos * dermot@sciencephoto.com
> Network Administrator @ Science Photo Library
> Phone: 0207 432 1100 * Fax: 0207 286 8668
> 
> 




Re: request appears to be processed twice with PDF::Create

Posted by Dermot Paikkos <de...@sciencephoto.com>.
Hi Tom,

I think I see what you're saying. What if you ignored the first and 
worked on the second? Same problem, I guess, if you hit the first child. I 
haven't tried to determine if there are two processes as a result of 
hitting the form, but I know from the logs that the module starts 
cycling through the form data again.

It is probably very sloppy but my problem was that given a username 
and their file, the file was being opened twice which made parsing a 
bit tricky. All I want is to open the file once. I am open to more 
elegant suggestions though :-)
Dp.


On 30 Jul 2004 at 2:41, Tom Schindl wrote:

> Hi,
> 
> Are you sure that's working as you expected? If I got you right I'm
> afraid I have to disappoint you but this only works if the second
> request is handled by the same apache-child as the first one and
> there's another problem. If you come back one day later and  hit the
> same apache-child which already processed the first request, you won't
> get anything.
> 
> Tom
> 
> Dermot Paikkos wrote:
> 
> >Arnaud,
> >
> >I have found a way around this. I don't know if your interested but
> >it goes likes something like this:
> >
> >    foreach my $param ($r->param) {
> >                  if ($param =~ /\busers\b/) {
> >                      $users{$r->param($param)} = 0;
> >	}
> >....snip...then later
> >
> >  foreach my $key (keys %users) {
> >		next if ($users{$key} == 1;
> >		$users{$key} = 1;
> >  }
> >
> >The idea being you only work request that haven't been processed yet.
> > Once you process a request you increment that hash key to 1 and can
> >avoid using it again. IE still sends the request twice and it is
> >working with the first request not the second. 
> >
> >Just a thought.
> >Dp.
> >
> >
> >
> >On 29 Jul 2004 at 16:20, Arnaud Blancher wrote:
> >
> >
> >  
> >
> >>Dermot Paikkos a écrit :
> >>
> >>    
> >>
> >>>Does this mean you have to go an clean up these files later
> >>>
> >>>      
> >>>
> >>yes, if you dont want they stay on the disk.
> >>
> >>    
> >>
> >>>or is 
> >>>this done when the process ends?
> >>>
> >>>      
> >>>
> >>maybe you can write a special handle for the directory where you ll
> >>write your pdf that delete the pdf when the connection (due to the
> >>redirect) will be close by the client (but i'not sure).
> >>
> >>    
> >>
> >>>I don't want to slow the users down 
> >>>unless I have to. 
> >>>
> >>>I think I would like to determine the user-agent and work around
> >>>the repeating requests....somehow. Do you know how to find out the
> >>>user- agent when using Apache::Request?  I can't see it when I use
> >>>this object. Thanx. Dp.
> >>>
> >>>
> >>> 
> >>>
> >>>
> >>>
> >>> 
> >>>
> >>>      
> >>>
> >>
> >>    
> >>
> >
> >
> >~~
> >Dermot Paikkos * dermot@sciencephoto.com
> >Network Administrator @ Science Photo Library
> >Phone: 0207 432 1100 * Fax: 0207 286 8668
> >
> >
> >  
> >
> 
> 
> 
> 


~~
Dermot Paikkos * dermot@sciencephoto.com
Network Administrator @ Science Photo Library
Phone: 0207 432 1100 * Fax: 0207 286 8668




Re: request appears to be processed twice with PDF::Create

Posted by Tom Schindl <to...@gmx.at>.
Hi,

Are you sure that's working as you expected? If I got you right, I'm afraid
I have to disappoint you, but this only works if the second request is
handled by the same Apache child as the first one, and there's another
problem: if you come back one day later and hit the same Apache child which
already processed the first request, you won't get anything.

Tom

Dermot Paikkos wrote:

>Arnaud,
>
>I have found a way around this. I don't know if your interested but 
>it goes likes something like this:
>
>    foreach my $param ($r->param) {
>                  if ($param =~ /\busers\b/) {
>                      $users{$r->param($param)} = 0;
>	}
>....snip...then later
>
>  foreach my $key (keys %users) {
>		next if ($users{$key} == 1;
>		$users{$key} = 1;
>  }
>
>The idea being you only work request that haven't been processed yet. 
>Once you process a request you increment that hash key to 1 and can 
>avoid using it again. IE still sends the request twice and it is 
>working with the first request not the second. 
>
>Just a thought.
>Dp.
>
>
>
>On 29 Jul 2004 at 16:20, Arnaud Blancher wrote:
>
>
>  
>
>>Dermot Paikkos a écrit :
>>
>>    
>>
>>>Does this mean you have to go an clean up these files later
>>>
>>>      
>>>
>>yes, if you dont want they stay on the disk.
>>
>>    
>>
>>>or is 
>>>this done when the process ends?
>>>
>>>      
>>>
>>maybe you can write a special handle for the directory where you ll
>>write your pdf that delete the pdf when the connection (due to the
>>redirect) will be close by the client (but i'not sure).
>>
>>    
>>
>>>I don't want to slow the users down 
>>>unless I have to. 
>>>
>>>I think I would like to determine the user-agent and work around the
>>>repeating requests....somehow. Do you know how to find out the user-
>>>agent when using Apache::Request?  I can't see it when I use this
>>>object. Thanx. Dp.
>>>
>>>
>>> 
>>>
>>>
>>>
>>> 
>>>
>>>      
>>>
>>
>>    
>>
>
>
>~~
>Dermot Paikkos * dermot@sciencephoto.com
>Network Administrator @ Science Photo Library
>Phone: 0207 432 1100 * Fax: 0207 286 8668
>
>
>  
>




Re: request appears to be processed twice with PDF::Create

Posted by Dermot Paikkos <de...@sciencephoto.com>.
Arnaud,

I have found a way around this. I don't know if you're interested, but 
it goes something like this:

    foreach my $param ($r->param) {
        if ($param =~ /\busers\b/) {
            $users{$r->param($param)} = 0;
        }
    }
....snip...then later

    foreach my $key (keys %users) {
        next if ($users{$key} == 1);
        $users{$key} = 1;
    }

The idea being you only work on requests that haven't been processed yet. 
Once you process a request you set that hash key to 1 and can 
avoid using it again. IE still sends the request twice, and it is 
the first request that gets acted on, not the second. 

Just a thought.
Dp.
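
A corrected, self-contained version of this idea (a plain list stands in for the incoming parameters, which is an assumption for illustration; note that this per-process hash does not survive across Apache children, a caveat raised elsewhere in the thread):

```perl
use strict;
use warnings;

# Names as they might arrive from a doubled request.
my @incoming = ( 'dermot paikkos', 'joe blogs', 'dermot paikkos' );

my %users;       # 0/absent = not yet processed, 1 = done
my @processed;
for my $name (@incoming) {
    next if $users{$name};   # already handled, skip the duplicate
    $users{$name} = 1;
    push @processed, $name;  # process the user exactly once
}

print join( ', ', @processed ), "\n";
```

Each name is processed once even though it arrives twice; the hash acts as a seen-set for the lifetime of the process.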



On 29 Jul 2004 at 16:20, Arnaud Blancher wrote:


> Dermot Paikkos a écrit :
> 
> >Does this mean you have to go an clean up these files later
> >
> yes, if you dont want they stay on the disk.
> 
> > or is 
> >this done when the process ends?
> >
> maybe you can write a special handle for the directory where you ll
> write your pdf that delete the pdf when the connection (due to the
> redirect) will be close by the client (but i'not sure).
> 
> > I don't want to slow the users down 
> >unless I have to. 
> >
> >I think I would like to determine the user-agent and work around the
> >repeating requests....somehow. Do you know how to find out the user-
> >agent when using Apache::Request?  I can't see it when I use this
> >object. Thanx. Dp.


~~
Dermot Paikkos * dermot@sciencephoto.com
Network Administrator @ Science Photo Library
Phone: 0207 432 1100 * Fax: 0207 286 8668




Re: request appears to be processed twice with PDF::Create

Posted by Arnaud Blancher <Ar...@ungi.net>.
Dermot Paikkos a écrit :

>Does this mean you have to go an clean up these files later
>
yes, if you don't want them to stay on the disk.

> or is 
>this done when the process ends?
>
maybe you can write a special handler for the directory where you'll 
write your PDF that deletes the PDF when the connection (due to the 
redirect) is closed by the client (but I'm not sure).

> I don't want to slow the users down 
>unless I have to. 
>
>I think I would like to determine the user-agent and work around the 
>repeating requests....somehow. Do you know how to find out the user-
>agent when using Apache::Request?  I can't see it when I use this 
>object.
>Thanx.
>Dp.





Re: request appears to be processed twice with PDF::Create

Posted by Dermot Paikkos <de...@sciencephoto.com>.
Does this mean you have to go and clean up these files later, or is 
this done when the process ends? I don't want to slow the users down 
unless I have to. 

I think I would like to determine the user-agent and work around the 
repeating requests....somehow. Do you know how to find out the user-
agent when using Apache::Request?  I can't see it when I use this 
object.
Thanx.
Dp.
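
On the User-Agent question: under mod_perl 1 the header is available via $r->header_in('User-Agent') (under mod_perl 2, $r->headers_in->{'User-Agent'}). The browser check itself can be sketched standalone, with a literal string standing in for the header (an assumption for illustration):

```perl
use strict;
use warnings;

# Decide whether a User-Agent string looks like IE 5 or IE 6.
sub is_ie5_or_6 {
    my ($ua) = @_;
    return ( defined $ua && $ua =~ /MSIE [56]\./ ) ? 1 : 0;
}

# Literal header standing in for $r->header_in('User-Agent').
my $ua = 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)';
print is_ie5_or_6($ua), "\n";  # 1
```

A handler could branch on this to work around the IE 5/6 double-request behaviour while leaving other browsers alone.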



On 29 Jul 2004 at 15:08, Arnaud Blancher wrote:

> Dermot Paikkos a écrit :
> 
> >Arnaud,
> >
> >Yes I am/was using I.E.6 I have installed Firefox 0.9 and it works
> >fine.
> >
> >A bit frustrating as I know that most of the users will be using I.E
> >5/6. I suppose I am not going to have to check the incoming request's
> > browser type and tweak the functions to suit each.
> >
> >  
> >
> 
> I had this problem with Cocoon and FOP!
> It seems to be a problem with the plugin in Internet Explorer which
> calls the program twice! (The first is not finished when the second comes,
> with a 304 status, so you can't see your PDF, isn't it?)
> 
> One solution is to create your PDF on the hard disk and send a redirect
> to the client for the static PDF. It works fine, but slowly!
> 
> Arnaud.
> 
> >  
> >
> >
> >  
> >
> 
> 
> 


~~
Dermot Paikkos * dermot@sciencephoto.com
Network Administrator @ Science Photo Library
Phone: 0207 432 1100 * Fax: 0207 286 8668




Re: request appears to be processed twice with PDF::Create

Posted by Arnaud Blancher <Ar...@ungi.net>.
Dermot Paikkos a écrit :

>Arnaud,
>
>Yes I am/was using I.E.6 I have installed Firefox 0.9 and it works 
>fine.
>
A bit frustrating, as I know that most of the users will be using IE 
5/6. I suppose I am now going to have to check the incoming request's 
browser type and tweak the functions to suit each.
>
>  
>

I had this problem with Cocoon and FOP!
It seems to be a problem with the plugin in Internet Explorer
which calls the program twice!
(The first is not finished when the second comes, with a 304 status, so you 
can't see your PDF, isn't it?)

One solution is to create your PDF on the hard disk and send a redirect 
to the client for the static PDF.
It works fine, but slowly!

Arnaud.






Re: Should JMeter 2.0 require JDK 1.4?

Posted by ms...@apache.org.
Ya, same way regex patterns are cached,

JMeter is way cool!  

On 27 Aug 2003 at 18:26, Jordi Salvat i Alabart wrote:

> 
> 
> mstover1@apache.org wrote:
> > B)we're caching the results
> 
> Are we? JMeter rocks, does it?
> 
> :-)
> 
> -- 
> Salut,
> 
> Jordi.
> 
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: jmeter-dev-unsubscribe@jakarta.apache.org
> For additional commands, e-mail: jmeter-dev-help@jakarta.apache.org
> 




--
Michael Stover
mstover1@apache.org
Yahoo IM: mstover_ya
ICQ: 152975688
AIM: mstover777

Re: Should JMeter 2.0 require JDK 1.4?

Posted by Jordi Salvat i Alabart <js...@atg.com>.

mstover1@apache.org wrote:
> B)we're caching the results

Are we? JMeter rocks, does it?

:-)

-- 
Salut,

Jordi.


Re: WebServer.java

Posted by Daniel Rall <dl...@finemaltcoding.com>.
"Rob Walker" <ro...@softsell.com> writes:

> > I'd rather have the stability of a full-blown, maintained servlet
> > engine; XML-RPC's spec makes perfomant implementations difficult,
> > anyhow.  We avoid doing performance-critical operations over XML-RPC. In
> > the future, we will be using more and more Spread
> > <http://www.spread.org/>, perhaps eventually via JMS
> > <http://jms4spread.tigris.org/>.  We may install the XmlRpcServer as the
> > RPC layer to use over Spread or JMS. -- 
> 
> Interesting - thanks for the links. I'll take a look
> 
> Performance wise, maybe our app is peculiar, but for us XML/RPC 
> absolutely flies. We use it for a variety of things, but one aspect is remote 
> log and stats monitoring - which generates a very high rate/volume, of short 
> to medium size (http payload wise) xml/rpc calls. Once we got keep alive 
> figured, xml/rpc beat the daylights out of most other things we tried - direct 
> socket level, binary messaging was about the only thing to beat it, and then 
> not by enough to make the loss of flexibility worthwhile.

That's actually why I like message oriented middleware (MOM) so much.
A good implementation (such as Spread) provides a very thin layer
above TCP multicast sockets, yielding an extremely efficient
implementation.  JMS provides an excellent Java API abstraction which
works across MOM vendors.

I like Spread as my MOM provider due not only to its implementation
(which avoids a single point of failure), but also to the fact that
there are client language bindings for Java, Python, Perl, C (and,
I think, Ruby).

Now, if you need RPC, you have to build something on top of the MOM.
Which is where XML-RPC could come in (let's not invent yet another
custom wire protocol, UGH).
-- 

Daniel Rall <dl...@finemaltcoding.com>

Re: svn copy breaks on svn update

Posted by Jan Hendrik <li...@gmail.com>.
Concerning Re: svn copy breaks on svn update
Stefan Sperling wrote on 27 Aug 2009, 15:15, at least in part:

> On Thu, Aug 27, 2009 at 01:35:35PM +0200, Jan Hendrik wrote:
> > > Can you provide a script that starts with an empty repository and
> > > ends with this error?
> > 
> > Well, here is a script which should be close enough, except that it
> > does not produce yesterday's experience.  IOW with this things work
> > as one would expect.  Besides the test file "foo.php" attached.
> 
> I don't understand. The script does not reproduce the problem at all?
> What do you want people to do with foo.php?

I don't understand either what is going on.  Definitely I cannot 
update any working copy anymore without breaking it the moment 
update hits that "common" folder.  Yet I can't reproduce it either 
with the script.

As far as I can tell the script reproduces what the user did, with 
respect to what he had to do, what he says he did, and what the 
log tells:

svn cp several files to the "common" folder, renaming them on the 
way (foo.php provided as placeholder for those files; I don't know 
how to make and edit files by plain batch file in a use case as 
here);
edited both the source files and the target files;
committed everything successfully in rev. xxxx.

But updates on other working copies break on those files he had 
copied to the "common" folder with the error messages quoted in 
my first posting.  On *any* working copy, so it's not that just by 
coincidence one working copy got corrupted (as said to be 
expected and tolerated by design).

What's left behind in "common" are

bar1.inc.rxxxx (size as bar1.inc committed by user per rev. xxxx)
bar1.inc.copied (size as was when originally copied to "common")
bar1.inc.mine (size 0)

bar2.inc.rxxx (size 0)
bar2.inc.copied (size 0)
bar2.inc.mine (size 0)

This is on working copies where foo was never copied to bar, which 
should just add bar by way of svn update!

As things like that usually hit at the worst moment we can't fool 
around endlessly.  So I manually copied the changes into the other 
users' working copies, and when they tell me they are ready to 
commit I copy their changes back into the original user's working 
copy and commit (as this is the only working copy knowing about 
those files copied to "common").  In our small team this has to 
work for the next few days.  However, I suppose that we'll find that 
the repository has been rendered unusable and therefore lost with 
all history.

JH
---------------------------------------
Freedom quote:

     If some among you fear taking a stand
     because you are afraid of reprisals
     from customers, clients, or even government,
     recognize that you are just feeding the crocodile
     hoping he'll eat you last.
               -- Ronald Reagan

------------------------------------------------------
http://subversion.tigris.org/ds/viewMessage.do?dsForumId=1065&dsMessageId=2387960

To unsubscribe from this discussion, e-mail: [users-unsubscribe@subversion.tigris.org].

Re: svn copy breaks on svn update

Posted by Stefan Sperling <st...@elego.de>.
On Thu, Aug 27, 2009 at 01:35:35PM +0200, Jan Hendrik wrote:
> > Can you provide a script that starts with an empty repository
> > and ends with this error?
> 
> Well, here is a script which should be close enough, except that it 
> does not produce yesterday's experience.  IOW with this things 
> work as one would expect.  Besides the test file "foo.php" attached.

I don't understand. The script does not reproduce the problem at all?
What do you want people to do with foo.php?

Stefan


Re: use-commit-times = effectless?

Posted by Jan Hendrik <ja...@bigfoot.com>.
> Note that deltification will be automatic again in 0.35 and after. You
> can remove it from your post-commit hooks as soon as you start running
> 0.35 or above.

Karl,
that's great news! And good news! Though I assume I would get it 
working sometime ... <g>

Best regards

Jan Hendrik

---------------------------------------
Freedom quote:

     A wise and frugal government, which shall restrain men from
     injuring one another, which shall leave them otherwise free to
     regulate their own pursuits of industry and improvement, and shall
     not take from the mouth of labor the bread it has earned. This is
     the sum of good government, and all that is necessary to close the
     circle of our felicities.
          -- Thomas Jefferson, in his 1801 inaugural address


---------------------------------------------------------------------
To unsubscribe, e-mail: users-unsubscribe@subversion.tigris.org
For additional commands, e-mail: users-help@subversion.tigris.org

Re: use-commit-times = effectless?

Posted by kf...@collab.net.
Note that deltification will be automatic again in 0.35 and after.
You can remove it from your post-commit hooks as soon as you start
running 0.35 or above.

There will be a more prominent announcement about this later.

-Karl


"Jan Hendrik" <ja...@bigfoot.com> writes:
> Concerning RE: use-commit-times = effectless?
> Carsten Schurig wrote on 5 Dec 2003, 12:25, at least in part:
> 
> > my post-commit.bat looks like:
> > 
> > start /MIN /LOW C:\Programme\Subversion\bin\svnadmin.exe deltify -r %2
> > %1
> > 
> > With this the deltification should work. The problem I have is, that I
> > actually don't know if it works, though.
> 
> Not having tested this so far here I have an idea that may serve as 
> at least indirect proof for a successful execution of deltify:
> 
> Assuming that svnadmin deltify issues error codes - developers 
> requested here! else trial&error should help, too - you may 
> intercept these and only in case of success execute another 
> command: svnadmin list-unused-dblogs REPOS_PATH | xargs rm.
> 
> So if logfiles keep accumulating this would hint at deltification not 
> being executed. Can't say if mailer.py could be utilized for control 
> of success, too. When some day repositories may become more 
> stable here and I find some leisure time I might get into this. So far 
> I leave logfiles untouched here, cannot forget that my first repos 
> corruptions just happened after removing unused logs.
> 
> Best regards
> 
> Jan Hendrik
> 
> ---------------------------------------
> Freedom quote:
> 
>      Welfare's purpose should be to eliminate,
>      as far as possible, the need for its own existence.
>                 -- Ronald Reagan, Los Angeles Times, January 7, 1970
> 
> 


Re: use-commit-times = effectless?

Posted by Jan Hendrik Niemeyer <jh...@marine-niemeyer.com>.
Concerning Re: use-commit-times = effectless?
Dominic Anello wrote on 12 Dec 2003, 16:01, at least in part:

Hi Dominic!

> I can vouch for the start syntax working.  I've been using it for a
> couple of weeks to do both incremental dumps and deltification for our
> repository.  The dumps are definitely getting created just fine, and I
> haven't noticed excessive database growth, so I assume deltification
> is working.

Thanks a lot for sharing your elaborate batch file! I was just copying 
Carsten's command into the post-commit template, to have it 
available for later use, when your mail came in. It will also 
solve the backup problem I still had. This was not very pressing 
here, as we are less interested in history than in concurrent working, 
and the content of the working copies is backed up anyway 
(without .svn folders), so it would always be possible to build a new 
repos in the worst case. However, getting incremental dumps into 
backups was something on my todo list.

I think for those of us not (too) familiar with *nix and its shell scripts 
your batch offers a great starting point for Windows users, quite 
comparable to the *nix templates. I would suggest including it 
either as a [Windows] section in the current templates or as a separate 
template for Windows hook files (e.g. post-commit.bat.tmpl)

> The start command always seems to return a zero error
> code, so you can't really check the result of the svnadmin call within
> the post-commit script.  I suppose you could write a wrapper batch
> file that post-commit calls that will log the error code somewhere.

That should be a post-commit with the line "start [/MIN /B /LOW 
etc.] wrapper.bat", where wrapper.bat has the svnadmin commands 
without "start", but with the errorlevel stuff. However, as your script 
writes a log file it might not be that necessary at all. I think I'll try 
this nevertheless when I get my big HDD back, so I can do the 
trial&error locally.
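
The wrapper pattern described here can be sketched in POSIX shell (the thread's scripts are Windows batch, so this is only an analogue; `run_logged` and the `true`/`false` stand-ins are illustrative, taking the place of the real `svnadmin deltify -r REV REPOS` call):

```shell
#!/bin/sh
# Sketch of the wrapper idea: since a detached "start" always returns
# zero, the wrapper itself must capture the exit status and log it.
LOG=$(mktemp)

run_logged() {
    "$@" >>"$LOG" 2>&1          # redirect the command's output to the log
    status=$?                   # capture the exit status (batch: %ERRORLEVEL%)
    echo "exit status: $status" >>"$LOG"
    return "$status"
}

run_logged true  && echo "deltify succeeded"
run_logged false || echo "deltify failed"
```

The log then records one "exit status" line per command, which is the indirect proof of execution the thread is after.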

Have a fine weekend!

Jan Hendrik

---------------------------------------
Freedom quote:

     The power which a multiple millionaire,
     who may be my neighbor and perhaps my employer has over me
     is far less than that which the smallest functionary
     who wields the coercive power of the state,
     and on whose discretion it depends
     whether and how I am allowed to live or to work.
                -- F. A. Hayek



Re: use-commit-times = effectless?

Posted by Jan Hendrik Niemeyer <jh...@marine-niemeyer.com>.
Since the system seems to have gobbled up my original reply 
yesterday I train my fingers once more ...


Hi Dominic!

> I can vouch for the start syntax working.  I've been using it for a
> couple of weeks to do both incremental dumps and deltification for our
> repository.  The dumps are definitely getting created just fine, and I
> haven't noticed excessive database growth, so I assume deltification
> is working.

Thanks a lot for sharing your elaborate batch file! I just copied 
Carsten's line into the post-commit template to have it readily 
available when your mail came in. With the incremental dump it 
should also help me in getting a backup of the repos. Since we are 
less interested in history than in concurrent working, and working 
copies are backed up anyway, this was not a very pressing issue, 
but something on my todo list. Restoring a repos from a backup 
will be much easier than creating a new one from all the working 
copies!

I think your script is a very good starting point for all of us not 
(too) familiar with *nix shell scripts to get into the hooks thing 
under Windows. I would suggest including it in the hooks 
templates, either as a [Windows] section or as a separate template 
(e.g. post-commit.bat.tmpl).

> The start command always seems to return a zero error
> code, so you can't really check the result of the svnadmin call within
> the post-commit script.  I suppose you could write a wrapper batch
> file that post-commit calls that will log the error code somewhere.

That should be "start [/B /MIN /LOW etc.] wrapper.bat" in post-
commit.bat, where wrapper.bat has the respective svnadmin 
commands w/o "start", but with the errorlevel stuff. As you log things 
this might not be necessary, however.

Have a fine Sunday!

Jan Hendrik
---------------------------------------
Freedom quote:

     The only justifiable purpose of political institutions
     is to assure the unhindered development of the individual.
                -- Albert Einstein



Re: use-commit-times = effectless?

Posted by Jan Hendrik Niemeyer <jh...@marine-niemeyer.com>.
Concerning Re: use-commit-times = effectless?
Dominic Anello wrote on 12 Dec 2003, 16:01, at least in part:

[text is snipped]

> I can vouch for the start syntax working.  I've been using it for a
> couple of weeks to do both incremental dumps and deltification for our
> repository.  The dumps are definitely getting created just fine, and I
> haven't noticed excessive database growth, so I assume deltification
> is working.  The start command always seems to return a zero error
> code, so you can't really check the result of the svnadmin call within
> the post-commit script.  I suppose you could write a wrapper batch
> file that post-commit calls that will log the error code somewhere.
> 
> I've attached my post-commit script if it will help.

I have played around a little and finally got the wanted 
confirmation. Twice, to be exact. Appending ">> %err%" to the 
deltification command redirects the screen output of svnadmin deltify 
to the error log you already had defined. With %log% it worked, 
too, but then the rest of the script is no longer written to post-
commit.log. Therefore the second log file is needed.

But moving the echo lines after the actual command and 
separating them with "&&" writes them only in case the command 
was executed successfully. So the logfile should now say what 
really happened, not just what commands were executed.
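
The "&&" trick translates directly to POSIX shell (a sketch, not the original post-commit.bat; the step names are made up): the echo runs only when the preceding command succeeds, so the log records outcomes rather than attempts.

```shell
#!/bin/sh
# "&&" chaining: each echo runs only if the command before it
# succeeded, so the log shows what really happened.
ERRLOG=$(mktemp)

true  >>"$ERRLOG" 2>&1 && echo "deltify done" >>"$ERRLOG"
false >>"$ERRLOG" 2>&1 && echo "dump done"    >>"$ERRLOG"

cat "$ERRLOG"   # the failed step logged nothing
```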

The block for removing the logfiles seems to work, however, my 
playground was too small to test this out. See post-commit.bat 
included once more.

Thanks again for getting me started!

Jan Hendrik
---------------------------------------
Freedom quote:

     There is no worse tyranny than to force a man to pay for
     what he does not want merely because you think it would be good for him.
               -- Robert Heinlein



Re: use-commit-times = effectless?

Posted by Dominic Anello <da...@danky.com>.
On 2003-12-12 11:20:38 +0100, Jan Hendrik wrote:
----8<---- 
> Not having tested this so far here I have an idea that may serve as 
> at least indirect proof for a successful execution of deltify:
> 
> Assuming that svnadmin deltify issues error codes - developers 
> requested here! else trial&error should help, too - you may 
> intercept these and only in case of success execute another 
> command: svnadmin list-unused-dblogs REPOS_PATH | xargs rm.
> 
> So if logfiles keep accumulating this would hint at deltification not 
> being executed. Can't say if mailer.py could be utilized for control 
> of success, too. When some day repositories may become more 
> stable here and I find some leisure time I might get into this. So far 
> I leave logfiles untouched here, cannot forget that my first repos 
> corruptions just happened after removing unused logs.
> 
> Best regards
> 
> Jan Hendrik

I can vouch for the start syntax working.  I've been using it for a
couple of weeks to do both incremental dumps and deltification for our
repository.  The dumps are definitely getting created just fine, and I
haven't noticed excessive database growth, so I assume deltification is
working.  The start command always seems to return a zero error code, so
you can't really check the result of the svnadmin call within the
post-commit script.  I suppose you could write a wrapper batch file
that post-commit calls that will log the error code somewhere.

I've attached my post-commit script if it will help.

-Dominic

Re: Translators needed for Wizard Templates Titles

Posted by Jürgen Schmidt <jo...@gmail.com>.
On 5/16/13 3:03 PM, Ariel Constenla-Haile wrote:
> On Fri, Sep 28, 2012 at 02:47:11AM -0300, Ariel Constenla-Haile
> wrote:
>> On Sat, Aug 25, 2012 at 07:02:18PM -0300, Ariel Constenla-Haile
>> wrote:
>>> Hello *,
>>> 
>>> Bug 110378 https://issues.apache.org/ooo/show_bug.cgi?id=110378
>>> found a total of 292 templates without localized Title, in 11
>>> Languages (es, eu, fr, it, ja, ko, pt-BR, sk, sv, zh-CN,
>>> zh-TW).
>>> 
>>> The affected languages are:
>>> 
>>> - Currently supported languages:
>>> 
>>> * es, Spanish * fr, French * it, Italian * ja, Japanese *
>>> pt-BR, Portuguese (Brazil) * sk, Slovak * zh-CN, Chinese
>>> (simplified) * zh-TW, Chinese (traditional)
>>> 
>>> - Currently unsupported languages:
>>> 
>>> * eu, Basque * ko, Korean * sv, Swedish
>> 
>> Translations are still needed for
>> 
>> - Chinese (traditional) - Portuguese (Brazil)
>> 
>> The translation is rather simple, almost 20/30 words, of kind
>> "Orange", "Classic", "Modern".
> 
> Just a reminder that (after more than 8 months) translations are
> still missing.

It seems that I have missed this thread; sorry for the confusion and
for my new thread on this.

Juergen

> 
> 
> Regards
> 


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@openoffice.apache.org
For additional commands, e-mail: dev-help@openoffice.apache.org


Re: Bugzilla projects setup...

Posted by Joshua Slive <jo...@slive.ca>.
William A. Rowe, Jr. wrote:

> 
> Isn't "Apache" implicit?  I suspect "HTTP Server" is the easiest entry
> (or HTTP 1.3 Server/HTTP 2.0 Server).  httpd implies you are running
> on a unix-derivate, Win32 or Netware users might be somewhat lost.
> 
> otoh, HTTP Server could be misinterpreted as the standalone Jakarta
> which would not be pretty.  Perhaps Pier was right, Apache itself says
> a ton.  If we populate a component list under Apache that tells the
> rest of the story, nobody would be confused.

Right.  The name definitely must have Apache, since everyone outside the 
ASF (and many inside it) refer to it that way.  "Apache HTTP" would be 
fine with me.

> 
> What about core, the whole list worth of modules, apr and apr-util,
> pcre and xml as Bugzilla "Components"?

Yes, we need the component list filled in.  At least the "categories" 
from the old gnats database would be a start.

The other thing that needs to be done "soon" is to modify the default 
page under http://bugs.apache.org/ to at least reference the other 
database.  Otherwise, both databases will continue to get reports.

Joshua.


Re: User meetup in New York on October 28

Posted by Jarek Jarcec Cecho <ja...@apache.org>.
Thank you very much to our speakers, Venkat, Abe and Masatake for giving great talks on our last User meetup. I hope that everyone had a great time! The slide decks are available on our wiki [1].

Jarcec

Links:
1: https://cwiki.apache.org/confluence/display/SQOOP/Home

On Wed, Oct 30, 2013 at 11:13:47PM -0700, Jarek Jarcec Cecho wrote:
> Hi Esin,
> due to webex licensing, I couldn't share the webex numbers on a public mailing list. Instead we had an RSVP process in place, so that we can share the webex meeting details only with RSVPed people.
> 
> Jarcec
> 
> On Mon, Oct 28, 2013 at 06:36:14PM -0400, Kiris, Esin wrote:
> > Hi Jarcec,
> > I would like to dial in. What is the webex and dial-in info?
> > 
> > Thanks,
> > 
> > Esin Kiris
> > 
> > -----Original Message-----
> > From: Jarek Jarcec Cecho [mailto:jarcec@apache.org]
> > Sent: Monday, October 28, 2013 3:00 PM
> > To: user@sqoop.apache.org; dev@sqoop.apache.org
> > Subject: Re: User meetup in New York on October 28
> > 
> > The meetup is on fourth floor in a meeting room called Hudson. I would suggest to take the elevators from the lobby directly to the fourth floor and then the room are just next to the elevators.
> > 
> > Jarcec
> > 
> > On Fri, Oct 25, 2013 at 04:52:28PM -0700, Jarek Jarcec Cecho wrote:
> > > Just a reminder that the New York Sqoop meetup is next Monday! See you all there!
> > >
> > > Jarcec
> > >
> > > P.S. I've sent a webex invite. If you are planning to join remotely and you did not get an invite, please do let me know.
> > >
> > > On Thu, Oct 03, 2013 at 09:39:07AM -0700, Jarek Jarcec Cecho wrote:
> > > > Hi Sqoop users and developers,
> > > > I would like to invite you all to Sqoop user meetup that will be happening on October 28. Come and join other Sqoop users and developers discussing various topics from Sqoop's past, present and future!
> > > >
> > > > We've prepared following interesting talks for the meetup:
> > > >
> > > > * HCatalog integration in Sqoop by Venkat Ranganathan
> > > > * Hue application for Sqoop2 by Abraham Elmahrek
> > > > * Complex stories about Sqooping PostgreSQL data by Masatake Iwasaki
> > > >
> > > > Don't forget to RSVP on our meetup page:
> > > >
> > > > http://www.meetup.com/Sqoop-User-Meetup/events/129236352/
> > > >
> > > > Mark your calendar with the following info:
> > > >
> > > > * Date: 2013-10-28
> > > > * Hour: 18:30
> > > > * Place: Hilton Hotel, 1335 Avenue of the Americas New York, New York, 10019
> > > > * Room: Hudson Suite
> > > >
> > > > If you are not going to be in New York city on 28th, but you would still like to participate, shoot me an email! We will be having webex session for the meetup, so we will be able to dial you in! You won't be able to enjoy the food and beverages though.
> > > >
> > > > Jarcec
> > >
> > >
> > 
> > 
> > 
> > 
> > 
> > ATTENTION: -----
> > 
> > The information contained in this message (including any files transmitted with this message) may contain proprietary, trade secret or other  confidential and/or legally privileged information. Any pricing information contained in this message or in any files transmitted with this message is always confidential and cannot be shared with any third parties without prior written approval from Syncsort. This message is intended to be read only by the individual or entity to whom it is addressed or by their designee. If the reader of this message is not the intended recipient, you are on notice that any use, disclosure, copying or distribution of this message, in any form, is strictly prohibited. If you have received this message in error, please immediately notify the sender and/or Syncsort and destroy all copies of this message in your possession, custody or control.



Re: User meetup in New York on October 28

Posted by Jarek Jarcec Cecho <ja...@apache.org>.
Hi Esin,
due to webex licensing, I couldn't share the webex numbers on a public mailing list. Instead we had an RSVP process in place, so that we can share the webex meeting details only with RSVPed people.

Jarcec

On Mon, Oct 28, 2013 at 06:36:14PM -0400, Kiris, Esin wrote:
> Hi Jarcec,
> I would like to dial in. What is the webex and dial-in info?
> 
> Thanks,
> 
> Esin Kiris
> 
> -----Original Message-----
> From: Jarek Jarcec Cecho [mailto:jarcec@apache.org]
> Sent: Monday, October 28, 2013 3:00 PM
> To: user@sqoop.apache.org; dev@sqoop.apache.org
> Subject: Re: User meetup in New York on October 28
> 
> > The meetup is on the fourth floor, in a meeting room called Hudson. I would suggest taking the elevators from the lobby directly to the fourth floor; the room is just next to the elevators.
> 
> Jarcec
> 
> On Fri, Oct 25, 2013 at 04:52:28PM -0700, Jarek Jarcec Cecho wrote:
> > > Just a reminder that the New York Sqoop meetup is next Monday! See you all there!
> >
> > Jarcec
> >
> > P.S. I've sent a webex invite. If you are planning to join remotely and you did not get an invite, please do let me know.
> >
> > On Thu, Oct 03, 2013 at 09:39:07AM -0700, Jarek Jarcec Cecho wrote:
> > > Hi Sqoop users and developers,
> > > I would like to invite you all to Sqoop user meetup that will be happening on October 28. Come and join other Sqoop users and developers discussing various topics from Sqoop's past, present and future!
> > >
> > > We've prepared following interesting talks for the meetup:
> > >
> > > * HCatalog integration in Sqoop by Venkat Ranganathan
> > > * Hue application for Sqoop2 by Abraham Elmahrek
> > > * Complex stories about Sqooping PostgreSQL data by Masatake Iwasaki
> > >
> > > Don't forget to RSVP on our meetup page:
> > >
> > > http://www.meetup.com/Sqoop-User-Meetup/events/129236352/
> > >
> > > Mark your calendar with the following info:
> > >
> > > * Date: 2013-10-28
> > > * Hour: 18:30
> > > * Place: Hilton Hotel, 1335 Avenue of the Americas New York, New York, 10019
> > > * Room: Hudson Suite
> > >
> > > If you are not going to be in New York city on 28th, but you would still like to participate, shoot me an email! We will be having webex session for the meetup, so we will be able to dial you in! You won't be able to enjoy the food and beverages though.
> > >
> > > Jarcec
> >
> >
> 
> 
> 
> 
> 

RE: User meetup in New York on October 28

Posted by "Kiris, Esin" <ek...@syncsort.com>.
Hi Jarcec,
I would like to dial in. What is the webex and dial-in info?

Thanks,

Esin Kiris

-----Original Message-----
From: Jarek Jarcec Cecho [mailto:jarcec@apache.org]
Sent: Monday, October 28, 2013 3:00 PM
To: user@sqoop.apache.org; dev@sqoop.apache.org
Subject: Re: User meetup in New York on October 28

The meetup is on the fourth floor, in a meeting room called Hudson. I would suggest taking the elevators from the lobby directly to the fourth floor; the room is just next to the elevators.

Jarcec

On Fri, Oct 25, 2013 at 04:52:28PM -0700, Jarek Jarcec Cecho wrote:
> Just a reminder that the New York Sqoop meetup is next Monday! See you all there!
>
> Jarcec
>
> P.S. I've sent a webex invite. If you are planning to join remotely and you did not get an invite, please do let me know.
>
> On Thu, Oct 03, 2013 at 09:39:07AM -0700, Jarek Jarcec Cecho wrote:
> > Hi Sqoop users and developers,
> > I would like to invite you all to Sqoop user meetup that will be happening on October 28. Come and join other Sqoop users and developers discussing various topics from Sqoop's past, present and future!
> >
> > We've prepared following interesting talks for the meetup:
> >
> > * HCatalog integration in Sqoop by Venkat Ranganathan
> > * Hue application for Sqoop2 by Abraham Elmahrek
> > * Complex stories about Sqooping PostgreSQL data by Masatake Iwasaki
> >
> > Don't forget to RSVP on our meetup page:
> >
> > http://www.meetup.com/Sqoop-User-Meetup/events/129236352/
> >
> > Mark your calendar with the following info:
> >
> > * Date: 2013-10-28
> > * Hour: 18:30
> > * Place: Hilton Hotel, 1335 Avenue of the Americas New York, New York, 10019
> > * Room: Hudson Suite
> >
> > If you are not going to be in New York city on 28th, but you would still like to participate, shoot me an email! We will be having webex session for the meetup, so we will be able to dial you in! You won't be able to enjoy the food and beverages though.
> >
> > Jarcec
>
>






Re: smil animation broken after first run

Posted by massimo citterio <ci...@sinapto.net>.
Further investigations make me think that it is not possible to remove an
animation, or the parent of the animation.
E.g.: if an animation moves a rect, neither the rect nor the animation can
be removed.

If I try to do this, I get those tick errors.
A workaround I've found is to never remove an animation or its parent, but
always reuse them, and hide the parent when it is not needed.

Did anyone succeed in adding and removing animation?
Is this weird thing happening to me only?

I am trying to create a small application that can reproduce the
problem, as soon as it's done, I will post it

Massimo


---------------------------------------------------------------------
To unsubscribe, e-mail: batik-users-unsubscribe@xmlgraphics.apache.org
For additional commands, e-mail: batik-users-help@xmlgraphics.apache.org


Re: OOM on CompressionMetadata.readChunkOffsets(..)

Posted by Sylvain Lebresne <sy...@datastax.com>.
On Mon, Oct 31, 2011 at 2:58 PM, Sylvain Lebresne <sy...@datastax.com> wrote:
> On Mon, Oct 31, 2011 at 1:10 PM, Mick Semb Wever <mc...@apache.org> wrote:
>> On Mon, 2011-10-31 at 13:05 +0100, Mick Semb Wever wrote:
>>> Given a 60G sstable, even with 64kb chunk_length, to read just that one
>>> sstable requires close to 8G free heap memory...
>>
>> Arg, that calculation was a little off...
>>  (a long isn't exactly 8K...)
>>
>> But you get my concern...
>
> Well, with a long being only 8 bytes, that's 8MB of free heap memory. Without
> being negligible, that's not completely crazy to me.
>
> No, the problem is that we create those 8MB for each read, which *is* crazy
> (the fact that we allocate those 8MB in one block is not very nice for
> the GC either
> but that's another problem).
> Anyway, that's really a bug and I've created CASSANDRA-3427 to fix.

Note that it's only a problem for range queries.

--
Sylvain

>
> --
> Sylvain
>
>>
>> ~mck
>>
>> --
>> "When you say: "I wrote a program that crashed Windows", people just
>> stare at you blankly and say: "Hey, I got those with the system -- for
>> free."" Linus Torvalds
>>
>> | http://semb.wever.org | http://sesat.no |
>> | http://tech.finn.no   | Java XSS Filter |
>>
>

Re: OOM on CompressionMetadata.readChunkOffsets(..)

Posted by Sylvain Lebresne <sy...@datastax.com>.
On Mon, Oct 31, 2011 at 1:10 PM, Mick Semb Wever <mc...@apache.org> wrote:
> On Mon, 2011-10-31 at 13:05 +0100, Mick Semb Wever wrote:
>> Given a 60G sstable, even with 64kb chunk_length, to read just that one
>> sstable requires close to 8G free heap memory...
>
> Arg, that calculation was a little off...
>  (a long isn't exactly 8K...)
>
> But you get my concern...

Well, with a long being only 8 bytes, that's 8MB of free heap memory. While
not negligible, that's not completely crazy to me.

No, the problem is that we create those 8MB for each read, which *is* crazy
(the fact that we allocate those 8MB in one block is not very nice for
the GC either,
but that's another problem).
Anyway, that's really a bug and I've created CASSANDRA-3427 to fix it.
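
For scale, the corrected arithmetic can be checked directly (a sketch using the numbers from the thread: a 60 GiB sstable, 64 KiB chunk_length, and one 8-byte Java long per chunk offset):

```shell
#!/bin/sh
# One 8-byte long per chunk offset: a 60 GiB sstable with 64 KiB
# chunks needs ~7.5 MB of offsets per read -- MB, not GB.
SSTABLE=$((60 * 1024 * 1024 * 1024))   # 60 GiB in bytes
CHUNK=$((64 * 1024))                   # chunk_length of 64 KiB
CHUNKS=$((SSTABLE / CHUNK))
BYTES=$((CHUNKS * 8))
echo "$CHUNKS chunks -> $((BYTES / 1024)) KB of chunk offsets"
```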

--
Sylvain

>
> ~mck
>
> --
> "When you say: "I wrote a program that crashed Windows", people just
> stare at you blankly and say: "Hey, I got those with the system -- for
> free."" Linus Torvalds
>
> | http://semb.wever.org | http://sesat.no |
> | http://tech.finn.no   | Java XSS Filter |
>

Re: problems with ojb 0.9.8

Posted by David Forslund <dw...@lanl.gov>.
At 08:51 PM 12/30/2002 +0100, Armin Waibel wrote:
>Hi again,
>
>----- Original Message -----
>From: "David Forslund" <dw...@lanl.gov>
>To: "OJB Users List" <oj...@jakarta.apache.org>; "OJB Users List"
><oj...@jakarta.apache.org>
>Sent: Monday, December 30, 2002 8:32 PM
>Subject: Re: problems with ojb 0.9.8
>
>
> > At 08:18 PM 12/30/2002 +0100, Armin Waibel wrote:
> > >Hi Dave,
> > >
> > >----- Original Message -----
> > >From: "David Forslund" <dw...@lanl.gov>
> > >To: "OJB Users List" <oj...@jakarta.apache.org>
> > >Sent: Monday, December 30, 2002 7:53 PM
> > >Subject: Re: problems with ojb 0.9.8
> > >
> > >
> > > > I see what the problem is, but am not sure what the solution is.
> > > >
 > > > > I have an abstract class that is implemented with a number of
 > > > > classes.  I'm trying to create a unique key for an instance class,
 > > > > but when I check there are no field descriptors for the base class.
> > >
> > >Have you tried
> > >Class realClass = abstractBaseClass.getClass();
> > >ClassDescriptor cld = broker.getClassDescriptor(realClass);
 > > >to get the real class descriptor? Then it should be possible to get the
> > >field.
> >
> > This doesn't help because I'm just calling the getUniqueId within OJB
> > and I don't have any control over what it does except through
> > the repository.
>
>
>I do not understand this. You declare your 'valueId' as an autoincrement
>field, but in your stack trace it seems you make a direct call to
>PB.getUniqueId?

Well, I did add this because 0.9.8 was complaining about this field being
absent.  I have removed it without any change in the behavior.


> > >
 > > > > >>> > at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueId(DelegatingPersistenceBroker.java:242)
 > > > > >>> > at gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
 > > > > >>> > at gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationValue_.java:54)
>
>Could you post a code snippet to ease my understanding?
>By the way, this seems to be a bug.

I'm not sure what you mean by a code snippet.  When I call the class
constructor, I call getUniqueId with the class name and attribute.

This is the specific method I call:
/**
 * Return the next number in a persistent sequence.
 */
public long getNextSeq(Class clazz, String fieldName) {
    cat.debug("getNextSeq: " + clazz.getName() + " " + fieldName);
    // return sequenceManager.getUniqueId(clazz, fieldName);
    try {
        return broker.getUniqueId(clazz, fieldName);
    } catch (org.apache.ojb.broker.PersistenceBrokerException e) {
        cat.error("Can't get ID from broker: " + clazz.getName() + " " + fieldName, e);
        // System.exit(1);
        return 0;
    }
}

thanks,

Dave


>Armin
>
>
> >
> > >Or define your base class with all fields in the repository file and
> > >declare
> > >all extent-classes in the class-descriptor. Then the default sequence
 > > >manager implementations should be able to generate an id unique
> > >across all extents.
> > >Or define only the abstract class with all extent-classes, then you
> > >should be
> > >able to get one of the extent classes.
> >
> > This is how I have it defined in my repository_user.xml
> >
> >    <class-descriptor class="gov.lanl.COAS.ObservationValue_">
> >      <extent-class class-ref="gov.lanl.COAS.Multimedia_"/>
> >      <extent-class class-ref="gov.lanl.COAS.NoInformation_"/>
> >      <extent-class class-ref="gov.lanl.COAS.Numeric_"/>
> >      <extent-class class-ref="gov.lanl.COAS.ObservationId_"/>
> >      <extent-class class-ref="gov.lanl.COAS.QualifiedCodeInfo_"/>
> >      <extent-class class-ref="gov.lanl.COAS.QualifiedPersonId_"/>
> >      <extent-class class-ref="gov.lanl.COAS.Range_"/>
> >      <extent-class class-ref="gov.lanl.COAS.String_"/>
> >      <extent-class class-ref="gov.lanl.COAS.TimeSpan_"/>
 > >      <extent-class class-ref="gov.lanl.COAS.UniversalResourceIdentifier_"/>
> >      <extent-class class-ref="gov.lanl.COAS.Empty_"/>
> >    </class-descriptor>
> >
> > and an example for one of the extent classes
> >
> >   <class-descriptor
> >         isolation-level="read-uncommitted"
> >         class="gov.lanl.COAS.Empty_"
> >         table="OjbEmpty_"
> >   >
> >     <field-descriptor id="1"
> >         name="valueId"
> >         jdbc-type="INTEGER"
> >         column="valueId"
> >         primarykey="true"
> >         autoincrement="true"
> >     />
> >
> >   </class-descriptor>
> >
 > > there is no table for the ObservationValue_ class because it is an
 > > abstract class.  This is what I've been using for 0.9.7 and it works
 > > fine.  This fails under 0.9.8 when trying to get a uniqueid for each
 > > of the extent classes.  I think this is what you are describing in
 > > your last suggestion.
> >
> > thanks,
> > Dave
> >
> >
> > >HTH
> > >regards,
> > >Armin
> > >
> > > >
 > > > > This all worked fine in 0.9.7, but perhaps there has been some
 > > > > change in the semantics?  We put the necessary table elements in
 > > > > each instance of the class but not in the table for the base class
 > > > > (which actually doesn't exist).
> > > >
> > > > Thanks,
> > > >
> > > > Dave
> > > >
> > > > At 11:35 AM 12/30/2002 -0700, David Forslund wrote:
 > > > > >When I put a check inside of the getFieldDescriptor, I find that it
 > > > > >is being called by HighLowSequence with the argument ojbConcreteClass
 > > > > >and is returning a null for the field.  Is this what is expected?
> > > > >
> > > > >Dave
> > > > >
> > > > >At 10:43 AM 12/30/2002 -0700, David Forslund wrote:
 > > > > >>It wasn't null in my code that called the OJB code.  This code has
 > > > > >>been working fine in 0.9.7.  If the xml needed to change for some
 > > > > >>reason, it might have caused this.  I'm passing in a string of a
 > > > > >>variable that is defined in my table.  Whether OJB properly connects
 > > > > >>a "Field" to that table is where the problem may be.  It did in the
 > > > > >>past without any problem.  I have a hard time telling exactly what
 > > > > >>changed between these two versions.
> > > > >>
> > > > >>Thanks,
> > > > >>
> > > > >>Dave
> > > > >>At 01:49 PM 12/30/2002 +0100, Armin Waibel wrote:
> > > > >>>Hi David,
> > > > >>>
 > > > > >>>the sequence generator implementation now only generates
 > > > > >>>ids for fields declared in the repository.
 > > > > >>>I think you got this NullPointerException because the SM got a
 > > > > >>>'null' field:
> > > > >>>
> > > > >>><snip SequenceManagerHelper>
> > > > >>>public static String buildSequenceName(
> > > > >>>PersistenceBroker brokerForClass, FieldDescriptor field)
> > > > >>>     {
 > > > > >>>48--->!!! ClassDescriptor cldTargetClass = field.getClassDescriptor();
> > > > >>>                 String seqName = field.getSequenceName();
> > > > >>>.....
> > > > >>></snip>
> > > > >>>
 > > > > >>>So check in your code that the given FieldDescriptor isn't null.
> > > > >>>
> > > > >>>HTH
> > > > >>>
> > > > >>>regards,
> > > > >>>Armin
> > > > >>>
> > > > >>>----- Original Message -----
> > > > >>>From: "David Forslund" <dw...@lanl.gov>
> > > > >>>To: "OJB Users List" <oj...@jakarta.apache.org>
> > > > >>>Sent: Monday, December 30, 2002 1:33 AM
> > > > >>>Subject: Re: problems with ojb 0.9.8
> > > > >>>
> > > > >>>
> > > > >>> > I'm trying to upgrade from 0.9.7 to 0.9.8 and am having some
> > >problems
> > > > >>>that
> > > > >>> > I don't understand yet.
> > > > >>> >
 > > > > >>> > I'm getting the warning about not finding an autoincrement
 > > > > >>> > attribute for a class.  I'm not sure when I have to have an
 > > > > >>> > autoincrement attribute, but the primarykey for the class I'm
 > > > > >>> > using is a varchar so that autoincrement doesn't seem appropriate.
> > > > >>> >
 > > > > >>> > Subsequently, I get a null pointer exception in the
> > > > >>> > SequenceManagerHelper that I don't understand:
 > > > > >>> > java.lang.NullPointerException
 > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHelper.buildSequenceName(SequenceManagerHelper.java:48)
 > > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHiLoImpl.getUniqueId(SequenceManagerHiLoImpl.java:49)
 > > > > >>> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.getUniqueId(PersistenceBrokerImpl.java:2258)
 > > > > >>> >          at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueId(DelegatingPersistenceBroker.java:242)
 > > > > >>> >          at gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
 > > > > >>> >          at gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationValue_.java:54)
 > > > > >>> >          at gov.lanl.COAS.Empty_.<init>(Empty_.java:31)
> > > > >>> >
 > > > > >>> > I'm pretty sure that it is being called correctly from my
 > > > > >>> > code (which works fine in 0.9.7), but it is failing now.
> > > > >>> >
 > > > > >>> > An unrelated warning in a different application is that OJB
 > > > > >>> > says I should use addLike() for using LIKE, but it seems to
 > > > > >>> > use the right code anyway.  Is this just a deprecation issue?
 > > > > >>> > I don't see why it bothers to tell me this, if it can figure
 > > > > >>> > out what to do anyway.
> > > > >>> >
> > > > >>> > Thanks,
> > > > >>> >
> > > > >>> > Dave
> > > > >>> >
> > > > >>> >
> > > > >>> > --
> > > > >>> > To unsubscribe, e-mail:
> > > > >>><ma...@jakarta.apache.org>
> > > > >>> > For additional commands, e-mail:
> > > > >>><ma...@jakarta.apache.org>
> > > > >>> >
> > > > >>> >
> > > > >>> >
> > > > >>>
> > > > >>>
> > > > >>>--
> > > > >>>To unsubscribe, e-mail:
> > ><ma...@jakarta.apache.org>
> > > > >>>For additional commands, e-mail:
> > ><ma...@jakarta.apache.org>
> > > > >>
> > > > >>
> > > > >>--
> > > > >>To unsubscribe, e-mail:
> > ><ma...@jakarta.apache.org>
> > > > >>For additional commands, e-mail:
> > ><ma...@jakarta.apache.org>
> > > > >
> > > > >
> > > > >--
> > > > >To unsubscribe, e-mail:
> > ><ma...@jakarta.apache.org>
> > > > >For additional commands, e-mail:
> > ><ma...@jakarta.apache.org>
> > > >
> > > >
> > > > --
> > > > To unsubscribe, e-mail:
> > ><ma...@jakarta.apache.org>
> > > > For additional commands, e-mail:
> > ><ma...@jakarta.apache.org>
> > > >
> > > >
> > > >
> > >
> > >
> > >--
> > >To unsubscribe, e-mail:
><ma...@jakarta.apache.org>
> > >For additional commands, e-mail:
><ma...@jakarta.apache.org>
> >
> >
> > --
> > To unsubscribe, e-mail:
><ma...@jakarta.apache.org>
> > For additional commands, e-mail:
><ma...@jakarta.apache.org>
> >
> >
> >
>
>
>--
>To unsubscribe, e-mail:   <ma...@jakarta.apache.org>
>For additional commands, e-mail: <ma...@jakarta.apache.org>


Re: problems with ojb 0.9.8

Posted by Armin Waibel <ar...@code-au-lait.de>.
Hi again,

----- Original Message -----
From: "David Forslund" <dw...@lanl.gov>
To: "OJB Users List" <oj...@jakarta.apache.org>; "OJB Users List"
<oj...@jakarta.apache.org>
Sent: Monday, December 30, 2002 8:32 PM
Subject: Re: problems with ojb 0.9.8


> At 08:18 PM 12/30/2002 +0100, Armin Waibel wrote:
> >Hi Dave,
> >
> >----- Original Message -----
> >From: "David Forslund" <dw...@lanl.gov>
> >To: "OJB Users List" <oj...@jakarta.apache.org>
> >Sent: Monday, December 30, 2002 7:53 PM
> >Subject: Re: problems with ojb 0.9.8
> >
> >
> > > I see what the problem is, but am not sure what the solution is.
> > >
 > > > I have an abstract class that is implemented with a number of
 > > > classes.  I'm trying to create a unique key for an instance class,
 > > > but when I check there are no field descriptors for the base class.
> >
> >Have you tried
> >Class realClass = abstractBaseClass.getClass();
> >ClassDescriptor cld = broker.getClassDescriptor(realClass);
 > >to get the real class descriptor? Then it should be possible to get the
> >field.
>
> This doesn't help because I'm just calling the getUniqueId within OJB
> and I don't have any control over what it does except through
> the repository.


I do not understand this. You declare your 'valueId' as an autoincrement
field, but in your stack trace it seems you make a direct call to
PB.getUniqueId?

> >
 > > > >>> > at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueId(DelegatingPersistenceBroker.java:242)
 > > > >>> > at gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
 > > > >>> > at gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationValue_.java:54)

Could you post a code snippet to ease my understanding?
By the way, this seems to be a bug.

Armin


>
> >Or define your base class with all fields in the repository file and
> >declare
> >all extent-classes in the class-descriptor. Then the default sequence
 > >manager implementations should be able to generate an id unique
> >across all extents.
> >Or define only the abstract class with all extent-classes, then you
> >should be
> >able to get one of the extent classes.
>
> This is how I have it defined in my repository_user.xml
>
>    <class-descriptor class="gov.lanl.COAS.ObservationValue_">
>      <extent-class class-ref="gov.lanl.COAS.Multimedia_"/>
>      <extent-class class-ref="gov.lanl.COAS.NoInformation_"/>
>      <extent-class class-ref="gov.lanl.COAS.Numeric_"/>
>      <extent-class class-ref="gov.lanl.COAS.ObservationId_"/>
>      <extent-class class-ref="gov.lanl.COAS.QualifiedCodeInfo_"/>
>      <extent-class class-ref="gov.lanl.COAS.QualifiedPersonId_"/>
>      <extent-class class-ref="gov.lanl.COAS.Range_"/>
>      <extent-class class-ref="gov.lanl.COAS.String_"/>
>      <extent-class class-ref="gov.lanl.COAS.TimeSpan_"/>
 >      <extent-class class-ref="gov.lanl.COAS.UniversalResourceIdentifier_"/>
>      <extent-class class-ref="gov.lanl.COAS.Empty_"/>
>    </class-descriptor>
>
> and an example for one of the extent classes
>
>   <class-descriptor
>         isolation-level="read-uncommitted"
>         class="gov.lanl.COAS.Empty_"
>         table="OjbEmpty_"
>   >
>     <field-descriptor id="1"
>         name="valueId"
>         jdbc-type="INTEGER"
>         column="valueId"
>         primarykey="true"
>         autoincrement="true"
>     />
>
>   </class-descriptor>
>
 > there is no table for the ObservationValue_ class because it is an
 > abstract class.  This is what I've been using for 0.9.7 and it works
 > fine.  This fails under 0.9.8 when trying to get a uniqueid for each
 > of the extent classes.  I think this is what you are describing in
 > your last suggestion.
>
> thanks,
> Dave
>
>
> >HTH
> >regards,
> >Armin
> >
> > >
 > > > This all worked fine in 0.9.7, but perhaps there has been some
 > > > change in the semantics?  We put the necessary table elements in
 > > > each instance of the class but not in the table for the base class
 > > > (which actually doesn't exist).
> > >
> > > Thanks,
> > >
> > > Dave
> > >
> > > At 11:35 AM 12/30/2002 -0700, David Forslund wrote:
 > > > >When I put a check inside of the getFieldDescriptor, I find that it
 > > > >is being called by HighLowSequence with the argument ojbConcreteClass
 > > > >and is returning a null for the field.  Is this what is expected?
> > > >
> > > >Dave
> > > >
> > > >At 10:43 AM 12/30/2002 -0700, David Forslund wrote:
 > > > >>It wasn't null in my code that called the OJB code.  This code has
 > > > >>been working fine in 0.9.7.  If the xml needed to change for some
 > > > >>reason, it might have caused this.  I'm passing in a string of a
 > > > >>variable that is defined in my table.  Whether OJB properly connects
 > > > >>a "Field" to that table is where the problem may be.  It did in the
 > > > >>past without any problem.  I have a hard time telling exactly what
 > > > >>changed between these two versions.
> > > >>
> > > >>Thanks,
> > > >>
> > > >>Dave
> > > >>At 01:49 PM 12/30/2002 +0100, Armin Waibel wrote:
> > > >>>Hi David,
> > > >>>
 > > > >>>the sequence generator implementation now only generates
 > > > >>>ids for fields declared in the repository.
 > > > >>>I think you got this NullPointerException because the SM got a
 > > > >>>'null' field:
> > > >>>
> > > >>><snip SequenceManagerHelper>
> > > >>>public static String buildSequenceName(
> > > >>>PersistenceBroker brokerForClass, FieldDescriptor field)
> > > >>>     {
 > > > >>>48--->!!! ClassDescriptor cldTargetClass = field.getClassDescriptor();
> > > >>>                 String seqName = field.getSequenceName();
> > > >>>.....
> > > >>></snip>
> > > >>>
 > > > >>>So check in your code that the given FieldDescriptor isn't null.
> > > >>>
> > > >>>HTH
> > > >>>
> > > >>>regards,
> > > >>>Armin
> > > >>>
> > > >>>----- Original Message -----
> > > >>>From: "David Forslund" <dw...@lanl.gov>
> > > >>>To: "OJB Users List" <oj...@jakarta.apache.org>
> > > >>>Sent: Monday, December 30, 2002 1:33 AM
> > > >>>Subject: Re: problems with ojb 0.9.8
> > > >>>
> > > >>>
> > > >>> > I'm trying to upgrade from 0.9.7 to 0.9.8 and am having some
> >problems
> > > >>>that
> > > >>> > I don't understand yet.
> > > >>> >
 > > > >>> > I'm getting the warning about not finding an autoincrement
 > > > >>> > attribute for a class.  I'm not sure when I have to have an
 > > > >>> > autoincrement attribute, but the primarykey for the class I'm
 > > > >>> > using is a varchar so that autoincrement doesn't seem appropriate.
> > > >>> >
 > > > >>> > Subsequently, I get a null pointer exception in the
> > > >>> > SequenceManagerHelper that I don't understand:
 > > > >>> > java.lang.NullPointerException
 > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHelper.buildSequenceName(SequenceManagerHelper.java:48)
 > > > >>> >          at org.apache.ojb.broker.util.sequence.SequenceManagerHiLoImpl.getUniqueId(SequenceManagerHiLoImpl.java:49)
 > > > >>> >          at org.apache.ojb.broker.singlevm.PersistenceBrokerImpl.getUniqueId(PersistenceBrokerImpl.java:2258)
 > > > >>> >          at org.apache.ojb.broker.singlevm.DelegatingPersistenceBroker.getUniqueId(DelegatingPersistenceBroker.java:242)
 > > > >>> >          at gov.lanl.Database.OJBDatabaseMgr.getNextSeq(OJBDatabaseMgr.java:582)
 > > > >>> >          at gov.lanl.COAS.AbstractObservationValue_.<init>(AbstractObservationValue_.java:54)
 > > > >>> >          at gov.lanl.COAS.Empty_.<init>(Empty_.java:31)
> > > >>> >
 > > > >>> > I'm pretty sure that it is being called correctly from my
 > > > >>> > code (which works fine in 0.9.7), but it is failing now.
> > > >>> >
 > > > >>> > An unrelated warning in a different application is that OJB
 > > > >>> > says I should use addLike() for using LIKE, but it seems to
 > > > >>> > use the right code anyway.  Is this just a deprecation issue?
 > > > >>> > I don't see why it bothers to tell me this, if it can figure
 > > > >>> > out what to do anyway.
> > > >>> >
> > > >>> > Thanks,
> > > >>> >
> > > >>> > Dave
> > > >>> >
> > > >>> >
> > > >>> > --
> > > >>> > To unsubscribe, e-mail:
> > > >>><ma...@jakarta.apache.org>
> > > >>> > For additional commands, e-mail:
> > > >>><ma...@jakarta.apache.org>
> > > >>> >
> > > >>> >
> > > >>> >
> > > >>>
> > > >>>
> > > >>>--
> > > >>>To unsubscribe, e-mail:
> ><ma...@jakarta.apache.org>
> > > >>>For additional commands, e-mail:
> ><ma...@jakarta.apache.org>
> > > >>
> > > >>
> > > >>--
> > > >>To unsubscribe, e-mail:
> ><ma...@jakarta.apache.org>
> > > >>For additional commands, e-mail:
> ><ma...@jakarta.apache.org>
> > > >
> > > >
> > > >--
> > > >To unsubscribe, e-mail:
> ><ma...@jakarta.apache.org>
> > > >For additional commands, e-mail:
> ><ma...@jakarta.apache.org>
> > >
> > >
> > > --
> > > To unsubscribe, e-mail:
> ><ma...@jakarta.apache.org>
> > > For additional commands, e-mail:
> ><ma...@jakarta.apache.org>
> > >
> > >
> > >
> >
> >
> >--
> >To unsubscribe, e-mail:
<ma...@jakarta.apache.org>
> >For additional commands, e-mail:
<ma...@jakarta.apache.org>
>
>
> --
> To unsubscribe, e-mail:
<ma...@jakarta.apache.org>
> For additional commands, e-mail:
<ma...@jakarta.apache.org>
>
>
>


Re: spamassassin setup

Posted by Dhaval Patel <dh...@patel.sh>.
> SpamAssassin comes with a whole bunch of rules by default.
> The best thing is to look at those rules and see what they're
> doing.  There's probably real documentation somewhere, but
> there is so much example code that you may not need it.

I did not see much in the local.cf after a fresh installation. I went to
http://www.yrex.com/spam/spamconfig.php to generate my config file. 

> 
> > So to see if an ip or hostname is in the RBL it would make a request to the RBL servers
> > on port 53 just like DNS queries?
> 
> It's not just like regular DNS queries.  It *is* a regular DNS
> query.  It doesn't go against any extra, third-party servers.
> I believe SpamAssassin uses its own resolver code, but it
> looks at /etc/resolv.conf just like anything else and uses
> the nameserver (nameservers?) it finds in there.

Thanks for clearing that up. But one more question about this: if it uses my
DNS servers, how does it query the RBL servers and give them the hostname or IP?
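For background on the mechanics (not specific to SpamAssassin's own resolver code): an RBL lookup encodes the address in the DNS name itself, by reversing the IP's octets and appending the RBL zone, so an ordinary recursive nameserver can forward it like any other query. A minimal sketch; the zone name below is just an example:

```java
public class RblName {
    /** Build the DNS name an RBL lookup resolves: reversed octets + zone. */
    public static String queryName(String ip, String zone) {
        String[] o = ip.split("\\.");
        return o[3] + "." + o[2] + "." + o[1] + "." + o[0] + "." + zone;
    }

    public static void main(String[] args) {
        // A listed IP answers with an A record (typically 127.0.0.x);
        // NXDOMAIN means the IP is not listed.
        System.out.println(queryName("203.0.113.7", "zen.spamhaus.org"));
    }
}
```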

> 
> The corollary to this is that you will need to make sure your
> existing DNS server can handle the load.  But if you have only
> 15 users, you should be OK on that.  :-)

I already checked with my hosting provider and they said its OK.



Thanks,
Dhaval

Re: OOM on CompressionMetadata.readChunkOffsets(..)

Posted by Mick Semb Wever <mc...@apache.org>.
On Mon, 2011-10-31 at 13:05 +0100, Mick Semb Wever wrote:
> Given a 60G sstable, even with 64kb chunk_length, to read just that one
> sstable requires close to 8G free heap memory... 

Arg, that calculation was a little off...
 (a long isn't exactly 8K...)

But you get my concern...

~mck

-- 
"When you say: "I wrote a program that crashed Windows", people just
stare at you blankly and say: "Hey, I got those with the system -- for
free."" Linus Torvalds 

| http://semb.wever.org | http://sesat.no |
| http://tech.finn.no   | Java XSS Filter |

Re: Translators needed for Wizard Templates Titles

Posted by Xuacu <xu...@gmail.com>.
Hi Ariel, all

2013/5/16 Ariel Constenla-Haile <ar...@apache.org>:
>
> The bug is for templates that are already localized, see for example
> http://svn.apache.org/viewvc/openoffice/trunk/main/extras/source/templates/wizard/letter/lang/
>

I guess I have something to work on to take advantage of this translation... ;)

All the best
..
Xuacu

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@openoffice.apache.org
For additional commands, e-mail: dev-help@openoffice.apache.org


Re: Translators needed for Wizard Templates Titles

Posted by Ariel Constenla-Haile <ar...@apache.org>.
Hi Xuacu,

On Thu, May 16, 2013 at 06:04:19PM +0200, Xuacu wrote:
> Done for Asturian language, even if it's not in the missing languages
> list: https://issues.apache.org/ooo/attachment.cgi?id=80700
> 
> All the best

The bug is for templates that are already localized, see for example
http://svn.apache.org/viewvc/openoffice/trunk/main/extras/source/templates/wizard/letter/lang/

Unfortunately, ast is not in the list; this means they have to be
localized, following http://wiki.openoffice.org/wiki/Localization/Extras

In the case of Writer-based templates, people have to translate the content
inside the document; while they are at it, they should also translate the
title and description in File - Properties...

For Impress Layout and Presentation templates, people have to translate the
title and description from File - Properties... for both, and the slide
names for presentation templates; I will try to find a way to put all the
strings in a property file for these files - IMO it is simpler that way.


Regards
-- 
Ariel Constenla-Haile
La Plata, Argentina

Re: Translators needed for Wizard Templates Titles

Posted by Xuacu <xu...@gmail.com>.
Done for Asturian language, even if it's not in the missing languages
list: https://issues.apache.org/ooo/attachment.cgi?id=80700

All the best
--
Xuacu

2013/5/16 Ariel Constenla-Haile <ar...@apache.org>:
>
> Just a reminder that (after more than 8 months) translations are still
> missing.
>
>
> Regards
> --
> Ariel Constenla-Haile
> La Plata, Argentina

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@openoffice.apache.org
For additional commands, e-mail: dev-help@openoffice.apache.org


CLI caching, etc (was Re: New error handling)

Posted by Upayavira <uv...@upaya.co.uk>.
Vadim,

> >>1. Implement setStatus() in AbstractCommandLineEnvironment 
> >>(implementation is empty right now)
> >>2. Add getStatus() to the AbstractCommandLineEnvironment
> >>3. Test getStatus() in the CLI crawling code.
> >>4. Test how it works and fix the broken link :)

Works a treat! Thanks. Although I had to modify the sitemap to give error codes 
(thanks Jeremy for your recent mail!)

> Not will, but does! This was done a long time ago (for http); otherwise,
> how would you get a 404 in the browser? :)

That's kinda what I meant ;-) 

> >Similarly, based upon comments from Nicola Ken ages ago:
> >
> >>>In the Environment there is
> >>>
> >>>    boolean isResponseModified(long lastModified);
> >>>    void setResponseIsNotModified();
> >>>
> >>>But it's never implemented. In AbstractEnvironment:
> >>>
> >>>    public boolean isResponseModified(long lastModified) {
> >>>        return true; // always modified
> >>>    }
> >>>
> >>>    public void setResponseIsNotModified() {
> >>>        // does nothing
> >>>    }
> >
> >Similarly, the setResponseIsNotModified() will be called on the
> >current environment if a response was read from the cache. At
> >present, this method does nothing.

> Before you go further with this... Look at method isResponseModified()
> in [1].
>  
> What you need to do is to:
> 1. Implement method isResponseModified() for command line environment.
> 2. In the CLI, get the file corresponding to the request URI, and get
>    its last modification time.
> 3. Populate environment with this modification time (this will be
>    similar to the If-Modified-Since date header in http).
> 4. Call cocoon. It will skip generation if the response is not
>    modified, and won't even read it from the cache.

Very interesting. So Cocoon can tell me if something has been modified. Great. 

However, if the Bean is able to send pages to various locations, it might not be able to 
identify when a page was generated without network traffic (e.g. when using FTP). 
This would be unfortunate, as a large site could involve a lot of network traffic, and 
the point of this is to avoid that.

I could store locally (in my own hashed up cache) the last modified date for the page 
and the list of links within the page, each time a page is generated. That way, when I 
am about to generate a page, I can easily get its timestamp. If I find that I don't need 
to generate the page, I can use my locally held list of links to follow.

Does this seem reasonable?

And finally, I have got code working to make the CLI use ModifiableSources rather 
than Destination objects. Do you think I still need to support the Destination 
interface (and deprecate it), or can I just delete it entirely?

Once I've got this going, I'll get on with attempting a VFS ModifiableSource (probably 
once I've had a three week holiday in South Africa!).

Thanks again.

Regards, Upayavira



Re: New error handling

Posted by Vadim Gritsenko <va...@verizon.net>.
Upayavira wrote:

>>>Is there a way that I can find out that a page has failed (either
>>>through an exception or an error code) so that I can prevent the CLI
>>>from just accepting the default page served back?
>
>  
>
>>I would recommend:
>>
>>1. Implement setStatus() in AbstractCommandLineEnvironment 
>>(implementation is empty right now)
>>2. Add getStatus() to the AbstractCommandLineEnvironment
>>3. Test getStatus() in the CLI crawling code.
>>4. Test how it works and fix the broken link :)
>>    
>>
>
>So you're saying that the treeprocessor (or whatever) will 
>

Not will, but does! This was done a long time ago (for http); otherwise,
how would you get a 404 in the browser? :)


>call the setStatus() method 
>on the environment to tell it whether page generation succeeded. Then all I need to 
>do is store that within the environment and use it. Is this correct?
>

Yes, this is correct.
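The setStatus()/getStatus() steps quoted above amount to something like this sketch (class and method names here are illustrative, not the actual Cocoon source):

```java
// Hypothetical sketch of a status-tracking CLI environment; the real
// AbstractCommandLineEnvironment would gain these methods instead.
public class StatusTrackingEnvironment {
    private int status = 200; // assume OK until the sitemap reports otherwise

    // called by the tree processor when page generation sets a status
    public void setStatus(int statusCode) {
        this.status = statusCode;
    }

    public int getStatus() {
        return status;
    }

    /** The CLI crawler checks this before saving the generated page. */
    public boolean succeeded() {
        return status < 400;
    }
}
```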


>Similarly, based upon comments from Nicola Ken ages ago:
>
>  
>
>>>In the Environment there is
>>>
>>>    boolean isResponseModified(long lastModified);
>>>    void setResponseIsNotModified();
>>>
>>>But it's never implemented. In AbstractEnvironment:
>>>
>>>    public boolean isResponseModified(long lastModified) {
>>>        return true; // always modified
>>>    }
>>>
>>>    public void setResponseIsNotModified() {
>>>        // does nothing
>>>    }
>>>      
>>>
>
>Similarly, the setResponseIsNotModified() will be called on the current environment if 
>a response was read from the cache. At present, this method does nothing.
>

Before you go further with this... Look at method isResponseModified() 
in [1].


>However, 
>if I get the environment to store something based upon this, then the CLI can know 
>whether or not to bother saving the page?
>

What you need to do is to:
1. Implement method isResponseModified() for command line environment.
2. In the CLI, get the file corresponding to the request URI, and get 
its last modification time.
3. Populate environment with this modification time (this will be 
similar to If-Modified-Since date header in http).
4. Call cocoon. It will skip generation if response is not modified, and 
won't even read it from cache.
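A sketch of how steps 1-3 might fit together; the names are assumptions for illustration, not the actual Cocoon API:

```java
// Hypothetical CLI environment: it is primed with the destination file's
// timestamp, and Cocoon asks it whether the response is newer than that.
public class CliEnvironmentSketch {
    private final long destLastModified; // 0 if the file doesn't exist yet

    public CliEnvironmentSketch(long destLastModified) {
        this.destLastModified = destLastModified;
    }

    // Cocoon calls this with the pipeline's last-modified time; returning
    // false lets it skip generation (and the cache read) entirely.
    public boolean isResponseModified(long pipelineLastModified) {
        return destLastModified <= 0 || pipelineLastModified > destLastModified;
    }
}
```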


>Thanks for this.
>  
>

You are welcome.

Vadim

[1] 
http://cvs.apache.org/viewcvs.cgi/cocoon-2.1/src/java/org/apache/cocoon/environment/http/HttpEnvironment.java?rev=1.3&content-type=text/vnd.viewcvs-markup



Re: Switching cache to Persistent Store

Posted by Vadim Gritsenko <va...@verizon.net>.
Upayavira wrote:

> ...
>  
>
>>Vadim's already answered you on that but another point is that I'm
>>pretty sure there's nothing wrong with the Store(s) or Cache because I
>>don't see this happening in the webapp.  You could prove this to
>>yourself by configuring the max-objects param for transient-store in
>>cocoon.xconf to a very small number (like 5) and then watching items
>>go into the persistent storage by using the webapp some.  You can then
>>use the sample to clear the MRU store and you'll see that you can
>>still get cached responses out of the persistent-store.
>>    
>>
>
>I haven't tried all that yet, but I have just got my webapp working via Tomcat rather 
>than Jetty and:
>* load a sample page
>* load status page - contains stuff in MRUMemoryStore
>* restart Tomcat and wait
>* reload status page - MRU and Default stores are both empty
>
>I'm happy that things might make it into the persistent store during the life of a 
>particular invocation of the servlet container. What I'm complaining about is that the 
>persistent store doesn't seem to survive a restart of the servlet container/Cocoon.
>

There was a bug in DefaultStore.java. It should be fixed now; try again. 
The same bug was present in excalibur-store too.

Vadim



Re: Switching cache to Persistent Store

Posted by Upayavira <uv...@upaya.co.uk>.
 ...
> 
> Vadim's already answered you on that but another point is that I'm
> pretty sure there's nothing wrong with the Store(s) or Cache because I
> don't see this happening in the webapp.  You could prove this to
> yourself by configuring the max-objects param for transient-store in
> cocoon.xconf to a very small number (like 5) and then watching items
> go into the persistent storage by using the webapp some.  You can then
> use the sample to clear the MRU store and you'll see that you can
> still get cached responses out of the persistent-store.

I haven't tried all that yet, but I have just got my webapp working via Tomcat rather 
than Jetty and:
* load a sample page
* load status page - contains stuff in MRUMemoryStore
* restart Tomcat and wait
* reload status page - MRU and Default stores are both empty

I'm happy that things might make it into the persistent store during the life of a 
particular invocation of the servlet container. What I'm complaining about is that the 
persistent store doesn't seem to survive a restart of the servlet container/Cocoon. 

Regards, Upayavira


Re: Switching cache to Persistent Store

Posted by Geoff Howard <co...@leverageweb.com>.
Upayavira wrote:
> Geoff,
> 

>>  - No cache element is needed in cocoon.xconf because it's defined in
>>  cocoon.roles (and if you just removed the event-cache by deleting the
>>  xconf entry you probably didn't really remove it) - The cache role is
>>  a wrapper around transient-store and persistent-store which come
>>  predefined in cocoon.xconf
> 
> Okay. All I was doing was switching from transient to persistent, but 
> your explanation below tells me I don't need to do that.
>>  These components work together like this:
>>
>>  Cache wraps transient-store which uses persistent-store (if and only
>>  if *<parameter name="use-persistent-cache" value="true"/> is set on
>>  transient-store in cocoon.xconf which it is by default)
>>
>>  All new items go into the transient-store, and if configured, older
>>  items are moved into persistent-store when they are bumped off the
>>  bottom of the stack, or on container shutdown.  All this happens by
>>  default out of the box and is working correctly (unless you
>>  delete/fail to reuse the cache-dir as Jetty does) for the servlet
>>  mode.
> 
> As far as I can see, I reuse the cache-dir. Or at least, there's a file 
> in the cache-dir that gets updated, and I use the same cache-dir each time.

...

> Anyway, I've done some more research, including downloading the source 
> for Avalon (for the first time!) and stepped through the code for the 
> Store.

...

Vadim's already answered you on that but another point is that I'm 
pretty sure there's nothing wrong with the Store(s) or Cache because I 
don't see this happening in the webapp.  You could prove this to 
yourself by configuring the max-objects param for transient-store in 
cocoon.xconf to a very small number (like 5) and then watching items go 
into the persistent storage by using the webapp some.  You can then use 
the sample to clear the MRU store and you'll see that you can still get 
cached responses out of the persistent-store.
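For reference, the wiring described above lives in cocoon.xconf. A rough 
sketch (the parameter names are the ones from this thread; other attributes 
and defaults vary by Cocoon version, so check your own cocoon.xconf):

```xml
<!-- In-memory MRU store: new cache entries land here first. -->
<transient-store>
  <!-- Set very low (e.g. 5) to watch entries overflow quickly. -->
  <parameter name="max-objects" value="5"/>
  <!-- The switch mentioned above; true by default, so older entries are
       moved to the persistent store when bumped off the bottom of the
       stack, or on container shutdown. -->
  <parameter name="use-persistent-cache" value="true"/>
</transient-store>

<!-- Disk-backed store: survives restarts, provided the cache-dir is
     reused rather than deleted (as Jetty does). -->
<persistent-store/>
```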

Let me know how that turns out.

Geoff


Re: Submit your Geode Summit 2019 session proposals, register to attend

Posted by "paul.perez" <pa...@pymma.com>.
  
  
Hi Jag,

I understand that organising a complete summit in Europe is difficult and expensive.

But organising a two-day local seminar in London, Paris or Brussels with fewer than 100 people is not so complex or expensive. We already organised one on OpenESB a few years ago, and Pymma's budgets are far smaller than Pivotal's.

It would be something simple where Geode developers and users can meet. Between lectures we would leave time for socialising and networking.

I really encourage Pivotal to organise such events. Going forward with simple, inexpensive events would certainly be more effective than waiting a few years for a summit in Europe.

Hope this can inspire vocations.

Regards
Paul

> On 1 Apr 2019 at 23:05, Jagdish Mirani wrote:
>
> Hey Paul:
> We couldn't make this happen, but this is still on our wish list. It requires quite a bit of logistics support from the events team, and they are already knee deep in supporting various events. We'll keep on it and hopefully be able to host a Euro Geode Summit in the not too distant future.
>
> Jag
>
> ...

Re: Submit your Geode Summit 2019 session proposals, register to attend

Posted by Jagdish Mirani <jm...@pivotal.io>.
Hey Paul:
We couldn't make this happen, but this is still on our wish list. It
requires quite a bit of logistics support from the events team, and they
are already knee deep in supporting various events. We'll keep on it and
hopefully be able to host a Euro Geode Summit in the not too distant future.
Jag

On Wed, Mar 6, 2019 at 8:48 AM Paul Perez <pa...@pymma.com> wrote:

> Hello All, Hello Jag,
>
>
>
> I notice we are still waiting for a European summit which was promised
> last year as far as I remember.
>
> 😉
>
>
>
> Best regards
>
>
>
> Paul
>
>
>
> *From:* Mark Secrist <ms...@pivotal.io>
> *Sent:* 06 March 2019 16:25
> *To:* user@geode.apache.org
> *Cc:* geode <de...@geode.apache.org>; Jared Ruckle <jr...@pivotal.io>
> *Subject:* Re: Submit your Geode Summit 2019 session proposals, register
> to attend
>
>
>
> Hey Jag - are there any particular learning topics that you would like to
> see? I can think of any number of items that could be a derivative of one
> of our training courses. I expect there will be a number of great topics
> covered by the product team, including the standard SDG session from John.
>
>
>
> Mark
>
>
>
> On Mon, Mar 4, 2019 at 10:11 AM Jagdish Mirani <jm...@pivotal.io> wrote:
>
> Hello Apache Geode community:
>
> Join us this October 7-10 for our fourth Geode Summit. As with prior
> years, the Geode Summit is being held in conjunction with the SpringOne
> Platform conference <https://springoneplatform.io/>, this time in Austin
> Texas on Oct 7-10, 2019.
>
>
>
> This is a great opportunity to share your Geode knowledge, success
> stories, and best practices. It's also a great opportunity to learn from
> others. In prior years we've seen a tremendous amount of useful knowledge
> shared by the community (videos: 2018
> <https://www.youtube.com/playlist?list=PL62pIycqXx-ShuJ1YpV2wlmNZodnqzY8T>,
> 2017
> <https://www.youtube.com/playlist?list=PL62pIycqXx-QfmNrUmfKoTZXKU5K90JpK>,
> 2016
> <https://www.youtube.com/playlist?list=PL62pIycqXx-RCEP2APj4Mq9Oeyvskj_0K>
> ).
>
>
>
> *The call for papers <https://springoneplatform.io/2019/cfp> is now open -
> just indicate Geode as the topic when you make your submission.*
>
>
>
> *Interested in Attending?*
>
> Even if you're not presenting, it would be great if you could still attend.
>
>
>
> As before, there will be a special contiguous half-day block of Geode
> sessions on the Monday of the conference (Monday Oct 7th, from 1-6PM),
> followed by a number of Geode sessions on Tues-Thurs of the same week.
> There are two ways to attend:
>
>    - A full conference registration entitles you to attend any of the
>    Geode sessions, including the Monday Oct 7th, half day Geode block. The
>    full conference pass prices do go up over time, so it's important to
>    register early. In addition to the early bird discount, you can use the
>    following discount code for an additional $200 off the full conference
>    pass: *S1P200_JMirani*.
>    - We've added a reduced price Monday only option for those who only
>    want to attend the Monday Geode sessions.
>
> Here's the registration link <https://springoneplatform.io/register>.
>
>
>
> We hope to see you in Austin this fall!
>
>
>
> Regards
>
> Jag
>
>
>
>
> --
>
> *Mark Secrist | Director**, **Global Education Delivery*
>
> msecrist@pivotal.io
>
> 970.214.4567 Mobile
>
>
>
>   *pivotal.io <http://www.pivotal.io/>*
>
> Follow Us: Twitter <http://www.twitter.com/pivotal> | LinkedIn
> <http://www.linkedin.com/company/pivotalsoftware> | Facebook
> <http://www.facebook.com/pivotalsoftware> | YouTube
> <http://www.youtube.com/gopivotal> | Google+
> <https://plus.google.com/105320112436428794490>
>


-- 
Regards
Jag

RE: Submit your Geode Summit 2019 session proposals, register to attend

Posted by Paul Perez <pa...@pymma.com>.
Hello All, Hello Jag, 

 

I notice we are still waiting for a European summit which was promised last year as far as I remember.

😉

 

Best regards

 

Paul 

 

From: Mark Secrist <ms...@pivotal.io> 
Sent: 06 March 2019 16:25
To: user@geode.apache.org
Cc: geode <de...@geode.apache.org>; Jared Ruckle <jr...@pivotal.io>
Subject: Re: Submit your Geode Summit 2019 session proposals, register to attend

 

Hey Jag - are there any particular learning topics that you would like to see? I can think of any number of items that could be a derivative of one of our training courses. I expect there will be a number of great topics covered by the product team, including the standard SDG session from John. 

 

Mark

 

On Mon, Mar 4, 2019 at 10:11 AM Jagdish Mirani <jmirani@pivotal.io <ma...@pivotal.io> > wrote:

Hello Apache Geode community:

Join us this October 7-10 for our fourth Geode Summit. As with prior years, the Geode Summit is being held in conjunction with the SpringOne Platform conference <https://springoneplatform.io/> , this time in Austin Texas on Oct 7-10, 2019.

 

This is a great opportunity to share your Geode knowledge, success stories, and best practices. It's also a great opportunity to learn from others. In prior years we've seen a tremendous amount of useful knowledge shared by the community (videos: 2018 <https://www.youtube.com/playlist?list=PL62pIycqXx-ShuJ1YpV2wlmNZodnqzY8T> , 2017 <https://www.youtube.com/playlist?list=PL62pIycqXx-QfmNrUmfKoTZXKU5K90JpK> , 2016 <https://www.youtube.com/playlist?list=PL62pIycqXx-RCEP2APj4Mq9Oeyvskj_0K> ). 

 

The call for papers <https://springoneplatform.io/2019/cfp>  is now open - just indicate Geode as the topic when you make your submission.

 

Interested in Attending?

Even if you're not presenting, it would be great if you could still attend.

 

As before, there will be a special contiguous half-day block of Geode sessions on the Monday of the conference (Monday Oct 7th, from 1-6PM), followed by a number of Geode sessions on Tues-Thurs of the same week. There are two ways to attend:

*	A full conference registration entitles you to attend any of the Geode sessions, including the Monday Oct 7th, half day Geode block. The full conference pass prices do go up over time, so it's important to register early. In addition to the early bird discount, you can use the following discount code for an additional $200 off the full conference pass: S1P200_JMirani.
*	We've added a reduced price Monday only option for those who only want to attend the Monday Geode sessions.

Here's the registration link <https://springoneplatform.io/register> .

 

We hope to see you in Austin this fall!

 

Regards

Jag




 

-- 

Mark Secrist | Director, Global Education Delivery

msecrist@pivotal.io

970.214.4567 Mobile

 

pivotal.io <http://www.pivotal.io/>

Follow Us:  <http://www.twitter.com/pivotal> Twitter |  <http://www.linkedin.com/company/pivotalsoftware> LinkedIn |  <http://www.facebook.com/pivotalsoftware> Facebook |  <http://www.youtube.com/gopivotal> YouTube |  <https://plus.google.com/105320112436428794490> Google+


Re: Submit your Geode Summit 2019 session proposals, register to attend

Posted by Jagdish Mirani <jm...@pivotal.io>.
Hey Mark:
The attendees typically have some working knowledge of Geode, so it would
have to be an advanced learning topic. Given the breadth of Geode, there
are many directions we could take with this.

Let me pose the question to the community at large. Folks, which specific
learning topics do you see as a good fit for the Summit - you can answer
this even if you're unable to attend.

Regards
Jag

On Wed, Mar 6, 2019 at 8:25 AM Mark Secrist <ms...@pivotal.io> wrote:

> Hey Jag - are there any particular learning topics that you would like to
> see? I can think of any number of items that could be a derivative of one
> of our training courses. I expect there will be a number of great topics
> covered by the product team, including the standard SDG session from John.
>
> Mark
>
> On Mon, Mar 4, 2019 at 10:11 AM Jagdish Mirani <jm...@pivotal.io> wrote:
>
> > Hello Apache Geode community:
> > Join us this October 7-10 for our fourth Geode Summit. As with prior
> > years, the Geode Summit is being held in conjunction with the SpringOne
> > Platform conference <https://springoneplatform.io/>, this time in Austin
> > Texas on Oct 7-10, 2019.
> >
> > This is a great opportunity to share your Geode knowledge, success
> > stories, and best practices. It's also a great opportunity to learn from
> > others. In prior years we've seen a tremendous amount of useful knowledge
> > shared by the community (videos: 2018
> > <
> https://www.youtube.com/playlist?list=PL62pIycqXx-ShuJ1YpV2wlmNZodnqzY8T>,
> > 2017
> > <
> https://www.youtube.com/playlist?list=PL62pIycqXx-QfmNrUmfKoTZXKU5K90JpK>,
> > 2016
> > <
> https://www.youtube.com/playlist?list=PL62pIycqXx-RCEP2APj4Mq9Oeyvskj_0K>
> > ).
> >
> > *The call for papers <https://springoneplatform.io/2019/cfp> is now
> open -
> > just indicate Geode as the topic when you make your submission.*
> >
> > *Interested in Attending?*
> > Even if you're not presenting, it would be great if you could still
> attend.
> >
> > As before, there will be a special contiguous half-day block of Geode
> > sessions on the Monday of the conference (Monday Oct 7th, from 1-6PM),
> > followed by a number of Geode sessions on Tues-Thurs of the same week.
> > There are two ways to attend:
> >
> >    - A full conference registration entitles you to attend any of the
> >    Geode sessions, including the Monday Oct 7th, half day Geode block.
> The
> >    full conference pass prices do go up over time, so it's important to
> >    register early. In addition to the early bird discount, you can use
> the
> >    following discount code for an additional $200 off the full conference
> >    pass: *S1P200_JMirani*.
> >
> >    - We've added a reduced price Monday only option for those who only
> >    want to attend the Monday Geode sessions.
> >
> > Here's the registration link <https://springoneplatform.io/register>.
> >
> > We hope to see you in Austin this fall!
> >
> > Regards
> > Jag
> >
>
>
> --
>
> *Mark Secrist | Director, **Global Education Delivery*
>
> msecrist@pivotal.io
>
> 970.214.4567 Mobile
>
>
>   *pivotal.io <http://www.pivotal.io/>*
>
> Follow Us: Twitter <http://www.twitter.com/pivotal> | LinkedIn
> <http://www.linkedin.com/company/pivotalsoftware> | Facebook
> <http://www.facebook.com/pivotalsoftware> | YouTube
> <http://www.youtube.com/gopivotal> | Google+
> <https://plus.google.com/105320112436428794490>
>


-- 
Regards
Jag

Re: Submit your Geode Summit 2019 session proposals, register to attend

Posted by Mark Secrist <ms...@pivotal.io>.
Hey Jag - are there any particular learning topics that you would like to
see? I can think of any number of items that could be a derivative of one
of our training courses. I expect there will be a number of great topics
covered by the product team, including the standard SDG session from John.

Mark

On Mon, Mar 4, 2019 at 10:11 AM Jagdish Mirani <jm...@pivotal.io> wrote:

> Hello Apache Geode community:
> Join us this October 7-10 for our fourth Geode Summit. As with prior
> years, the Geode Summit is being held in conjunction with the SpringOne
> Platform conference <https://springoneplatform.io/>, this time in Austin
> Texas on Oct 7-10, 2019.
>
> This is a great opportunity to share your Geode knowledge, success
> stories, and best practices. It's also a great opportunity to learn from
> others. In prior years we've seen a tremendous amount of useful knowledge
> shared by the community (videos: 2018
> <https://www.youtube.com/playlist?list=PL62pIycqXx-ShuJ1YpV2wlmNZodnqzY8T>,
> 2017
> <https://www.youtube.com/playlist?list=PL62pIycqXx-QfmNrUmfKoTZXKU5K90JpK>,
> 2016
> <https://www.youtube.com/playlist?list=PL62pIycqXx-RCEP2APj4Mq9Oeyvskj_0K>
> ).
>
> *The call for papers <https://springoneplatform.io/2019/cfp> is now open -
> just indicate Geode as the topic when you make your submission.*
>
> *Interested in Attending?*
> Even if you're not presenting, it would be great if you could still attend.
>
> As before, there will be a special contiguous half-day block of Geode
> sessions on the Monday of the conference (Monday Oct 7th, from 1-6PM),
> followed by a number of Geode sessions on Tues-Thurs of the same week.
> There are two ways to attend:
>
>    - A full conference registration entitles you to attend any of the
>    Geode sessions, including the Monday Oct 7th, half day Geode block. The
>    full conference pass prices do go up over time, so it's important to
>    register early. In addition to the early bird discount, you can use the
>    following discount code for an additional $200 off the full conference
>    pass: *S1P200_JMirani*.
>
>    - We've added a reduced price Monday only option for those who only
>    want to attend the Monday Geode sessions.
>
> Here's the registration link <https://springoneplatform.io/register>.
>
> We hope to see you in Austin this fall!
>
> Regards
> Jag
>


-- 

*Mark Secrist | Director, **Global Education Delivery*

msecrist@pivotal.io

970.214.4567 Mobile


  *pivotal.io <http://www.pivotal.io/>*

Follow Us: Twitter <http://www.twitter.com/pivotal> | LinkedIn
<http://www.linkedin.com/company/pivotalsoftware> | Facebook
<http://www.facebook.com/pivotalsoftware> | YouTube
<http://www.youtube.com/gopivotal> | Google+
<https://plus.google.com/105320112436428794490>

Re: Submit your Geode Summit 2019 session proposals, register to attend

Posted by Anthony Baker <ab...@apache.org>.
Sounds like a great event and I encourage you to a) submit a talk and b)
attend.

I also want to note that ApacheCon will be in Las Vegas this year [1] and
it would be great to see some Geode talks at that event as well.

Anthony

[1] https://www.apachecon.com/acna19/index.html



On Mar 4, 2019, at 9:11 AM, Jagdish Mirani <jm...@pivotal.io> wrote:

Hello Apache Geode community:
Join us this October 7-10 for our fourth Geode Summit. As with prior years,
the Geode Summit is being held in conjunction with the SpringOne Platform
conference <https://springoneplatform.io/>, this time in Austin Texas on
Oct 7-10, 2019.

This is a great opportunity to share your Geode knowledge, success stories,
and best practices. It's also a great opportunity to learn from others. In
prior years we've seen a tremendous amount of useful knowledge shared by
the community (videos: 2018
<https://www.youtube.com/playlist?list=PL62pIycqXx-ShuJ1YpV2wlmNZodnqzY8T>,
2017
<https://www.youtube.com/playlist?list=PL62pIycqXx-QfmNrUmfKoTZXKU5K90JpK>,
2016
<https://www.youtube.com/playlist?list=PL62pIycqXx-RCEP2APj4Mq9Oeyvskj_0K>
).

*The call for papers <https://springoneplatform.io/2019/cfp> is now open -
just indicate Geode as the topic when you make your submission.*

*Interested in Attending?*
Even if you're not presenting, it would be great if you could still attend.

As before, there will be a special contiguous half-day block of Geode
sessions on the Monday of the conference (Monday Oct 7th, from 1-6PM),
followed by a number of Geode sessions on Tues-Thurs of the same week.
There are two ways to attend:

   - A full conference registration entitles you to attend any of the Geode
   sessions, including the Monday Oct 7th, half day Geode block. The full
   conference pass prices do go up over time, so it's important to register
   early. In addition to the early bird discount, you can use the following
   discount code for an additional $200 off the full conference pass:
   *S1P200_JMirani*.

   - We've added a reduced price Monday only option for those who only want
   to attend the Monday Geode sessions.

Here's the registration link <https://springoneplatform.io/register>.

We hope to see you in Austin this fall!

Regards
Jag