Posted to dev@trafficserver.apache.org by Miles Libbey <ml...@apache.org> on 2009/11/24 19:20:03 UTC

document translation infrastructure?

Hi folks-
We have a volunteer to translate our documentation from English into 
Korean.  Any recommendations for translation management/infrastructure? 
That is, as the English documentation changes, is there any software 
that can help find out-of-date or new strings/sections?

thanks,
miles libbey

Re: document translation infrastructure?

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Wed, Nov 25, 2009 at 2:20 AM, Miles Libbey <ml...@apache.org> wrote:
> Hi folks-
> We have a volunteer to translate our documentation from English into Korean.
>  Any recommendations for translation management/infrastructure? That is-- as
> the English documentation changes, is there any software that can help to
> find out of date or new strings/sections?

Wouldn't this first of all depend a lot on the documentation source?

If you are using an XML-based source system, then shouldn't the xml:lang 
attribute be able to assist?



Cheers
-- 
Niclas Hedhman, Software Developer
http://www.qi4j.org - New Energy for Java

I  live here; http://tinyurl.com/2qq9er
I  work here; http://tinyurl.com/2ymelc
I relax here; http://tinyurl.com/2cgsug

Re: C++ version of InkAPI ?

Posted by Bryan Call <bc...@yahoo-inc.com>.
Personally I like C APIs; they seem to be easier to maintain.  I was once forced to implement an API using pure virtuals, and it was hell dealing with binary compatibility breaking all the time (well, hell for the people using the API :)).

Yes, the internal objects can be whatever you want.  That is the beauty of the pimpl.  I don't know of any automated tools, but I don't think it would be hard to write one.  Most of the time all you are doing is calling the same method on a pointer to the real implementation.
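
As a sketch of the private-implementation (pimpl) design described here (hypothetical `Session` type, not an actual InkAPI class; C++11 `std::unique_ptr` is used for brevity where a raw pointer would also do):

```cpp
#include <memory>
#include <string>

// session.h -- public header. The class layout is a single pointer, so
// adding fields or helpers to Impl later never changes the ABI.
class Session {
public:
  Session();
  ~Session();
  void set_host(const std::string &host);
  std::string host() const;

private:
  struct Impl;                  // defined only in the .cc file
  std::unique_ptr<Impl> impl_;  // sizeof(Session) stays stable
};

// session.cc -- the implementation can grow freely behind the pointer.
struct Session::Impl {
  std::string host;
};

Session::Session() : impl_(new Impl) {}
Session::~Session() = default;
void Session::set_host(const std::string &h) { impl_->host = h; }
std::string Session::host() const { return impl_->host; }
```

Every public method is just a one-line forward to `impl_`, which is the repetitive part a generator could emit.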

-Bryan

On Nov 25, 2009, at 2:12 PM, John Plevyak wrote:

> 
> Good point.  So, non-virtual functions, which would imply wrapper objects
> around internal objects with virtual functions. I wonder if there are any tools to automate that.
> Sigh.  This is getting complicated.  C++ is a pain.  Makes me think the C API isn't so bad.
> 
> john
> 
> Bryan Call wrote:
>> Having pure virtuals ties us into not being able to expand the API for the class without breaking binary compatibility.  I would rather see a private implementation design for the API instead.  That way, if we want to expand the API to add functionality, it won't break binary compatibility.
>> 
>> -Bryan
>> 
>> On 11/25/2009 11:34 AM, John Plevyak wrote:
>>> 
>>> Given that the existing C API is often a wrapper on an internal C++ API, there is a natural
>>> mapping of just using the existing internal C++ APIs as the InkAPI but just
>>> convert them to be pure virtual.   This would also encourage clean internal
>>> APIs to make it easier to use for the InkAPI.   I am not sure about the whole setter/getter
>>> vs exposing some stable data structures vs smarter higher level interfaces, but perhaps
>>> these are better considered on a case by case basis.
>>> 
>>> Doing internal C++ to external C then wrapping the external C with external C++ is going
>>> to result in a lot of code to be maintained unless we can automate it in some way.  But
>>> if it is automated then perhaps we can automate a native extern C++ interface as well.
>>> But if it is automated then why not just use SWIG to do the automation and why not just
>>> use SWIG to automate the external C from the C++ which we have an internal version of already?
>>> 
>>> Anyway that was the train of logic I went down.. of course it becomes more tenuous the
>>> farther you follow it, but it is still interesting to consider.
>>> 
>>> john
>>> 
>>> 
>>> Leif Hedstrom wrote:
>>>> On 11/25/2009 11:43 AM, John Plevyak wrote:
>>>>> 
>>>>> 
>>>>> << transfer from IRC>>
>>>>> 
>>>>> Here is a proposal:
>>>>> 
>>>>> 1. C++ APIs
>>>>> 2. Clean SWIG for supporting other language
>>>>>    (in other words, the C++ APIs would have to work well with SWIG)
>>>>> 
>>>>> Open question: do we expose some very stable data structures, e.g. IOBuffer, VIO ?
>>>> 
>>>> 
>>>> So, what exactly would change in the APIs? Are we going all out OO, and everything becomes class methods, getters/setters etc.?  If we do this, we should do an equally drastic redesign of the Remap APIs (and no more struct passing etc.).
>>>> 
>>>> As an alternative, how much worse would it be to have a low level C API (that we expose) like today, and then make a higher level object oriented API that wraps the C APIs?
>>>> 
>>>> For sure, now is the time to do any major changes like this :).
>>>> 
>>>> -- Leif
>>> 
>> 
> 



Re: C++ version of InkAPI ?

Posted by John Plevyak <jp...@acm.org>.
I have to agree that there are definitely bigger fish to fry.  High on 
my list is making range requests and large files fast, which means 
changing the cache structure a bit, and that should be done before the 
release.  I'd rather concentrate on that and just try to ensure that 
the existing C API is as clean as possible.  We can always prototype a 
C++ API on top of the C API and decide which one is the "base" 
later... like 3.0.

How about an experimental C++ API on top of the C API in the main tree, 
but in a "contrib" directory, so it is not officially supported but is 
there for folks who are capable of vetting the code themselves.

john

Belmon, Stephane wrote:
> Ditto. Can any C++ zealots please stand up? ;-) 
>
> -----Original Message-----
> From: Leif Hedstrom [mailto:zwoop@apache.org] 
> Sent: Wednesday, November 25, 2009 3:53 PM
> To: trafficserver-dev@incubator.apache.org
> Subject: Re: C++ version of InkAPI ?
>
> On 11/25/2009 04:32 PM, Leif Hedstrom wrote:
>   
>> I'd suggest we come up with a list of the requirements for what needs 
>> to be done in the "2.x" branch, and then freeze it. I'd be hard to 
>> convince that addressing the issues listed above is not one of those 
>> requirements :).
>>     
>
> I should say, changing APIs to be C++ is definitely not on my personal 
> "requirements" list, but I'm open to the ideas. You know me, I'm not 
> much of a C++ "bigot" in any way (me and templates don't go well 
> together). If we decide that changing APIs to C++ can wait until a 
> (much) later 3.0 release, then I'm totally fine with that (or even 
> canning the idea of C++ APIs as well; I just don't really feel strongly 
> either way). There might be bigger fish to fry honestly; like you say, 
> getting a stable release is definitely one of them (but I still think 
> we need to examine the existing APIs, and fix what is missing or 
> outright broken).
>
> Cheers,
>
> -- leif
>
>
>
>  Protected by Websense Hosted Email Security -- www.websense.com 
>   


RE: C++ version of InkAPI ?

Posted by "Belmon, Stephane" <sb...@websense.com>.
Ditto. Can any C++ zealots please stand up? ;-) 

-----Original Message-----
From: Leif Hedstrom [mailto:zwoop@apache.org] 
Sent: Wednesday, November 25, 2009 3:53 PM
To: trafficserver-dev@incubator.apache.org
Subject: Re: C++ version of InkAPI ?

On 11/25/2009 04:32 PM, Leif Hedstrom wrote:
>
>
> I'd suggest we come up with a list of the requirements for what needs 
> to be done in the "2.x" branch, and then freeze it. I'd be hard to 
> convince that addressing the issues listed above is not one of those 
> requirements :).

I should say, changing APIs to be C++ is definitely not on my personal 
"requirements" list, but I'm open to the ideas. You know me, I'm not 
much of a C++ "bigot" in any way (me and templates don't go well 
together). If we decide that changing APIs to C++ can wait until a 
(much) later 3.0 release, then I'm totally fine with that (or even 
canning the idea of C++ APIs as well; I just don't really feel strongly 
either way). There might be bigger fish to fry honestly; like you say, 
getting a stable release is definitely one of them (but I still think 
we need to examine the existing APIs, and fix what is missing or 
outright broken).

Cheers,

-- leif




Re: C++ version of InkAPI ?

Posted by Leif Hedstrom <zw...@apache.org>.
On 11/25/2009 04:32 PM, Leif Hedstrom wrote:
>
>
> I'd suggest we come up with a list of the requirements for what needs 
> to be done in the "2.x" branch, and then freeze it. I'd be hard to 
> convince that addressing the issues listed above is not one of those 
> requirements :).

I should say, changing APIs to be C++ is definitely not on my personal 
"requirements" list, but I'm open to the ideas. You know me, I'm not 
much of a C++ "bigot" in any way (me and templates don't go well 
together). If we decide that changing APIs to C++ can wait until a 
(much) later 3.0 release, then I'm totally fine with that (or even 
canning the idea of C++ APIs as well; I just don't really feel strongly 
either way). There might be bigger fish to fry honestly; like you say, 
getting a stable release is definitely one of them (but I still think 
we need to examine the existing APIs, and fix what is missing or 
outright broken).

Cheers,

-- leif


Re: C++ version of InkAPI ?

Posted by Leif Hedstrom <zw...@apache.org>.
On 11/25/2009 08:34 PM, Bryan Call wrote:
>
> I am talking about internal files created by Traffic Server, cache and hostdb being a couple of them.  Someone with a large setup would roll out software to a few servers at a time.  Most caches have a Zipfian distribution and a working set that would populate the cache to a stable cache hit ratio in a few hours; this has been my experience with very large databases.
>    

I think I disagree with the cache not being "important" to preserve; 
most people probably won't run deployments as massive as Y!'s. But this 
is obviously speculation only; the only "metrics" we have from use cases 
come from internal use.

> Yes, you just proved my point.  There are few people (not many) that have large caches.  You can add Search Crawler and Flickr (that doesn't use TS) to the list.  This is a minority of the users and a minority of the traffic.  Few groups are asking for larger caches.
>    

Well, my point isn't that >512GB caches are the normal case; it was that 
there are known use cases internally at Yahoo. I can't speculate on how 
people outside Y! would use TS; I don't think anyone knows.


> The items you list above are reasons groups will be less likely to 
> move to the Apache branch. People will have to modify their plugins 
> and wipe their caches. According to you this is "very disruptive".

Yes, it'll be disruptive, once, and for Y! engineering only. Something 
we're willing to deal with (once) as part of this OpenSource effort. I 
think it'd be much, much more disruptive breaking compatibilities once 
we've made an official ASF release. I'm assuming (and hoping) that we'll 
have hundreds if not thousands of customers relying on Traffic Server 
once we make an official release. Affecting them in a "disruptive" way 
when upgrading (2.2, 2.4, etc.) seems much more harmful than letting Y! 
take the one-time hit.

> You haven't addressed anything about stability and how we are going to test all the changes.  There have been a lot of changes to the Apache tree that haven't been fully tested.  Also, there have been a lot of changes that have happened and are happening to the internal branches that haven't made it to the Apache branch yet.
>    

My assumption has been that there are serious enough issues with the 
items I've pointed out that we need to fix them. If that is not the 
case, then sure, let's freeze the APIs / ABIs / cache layouts now, and 
focus on stability. But then we shouldn't break them until a "3.0" 
release, IMO at least.

This assumption of mine is based on two things I have experience with: 
1) The cache plugin APIs have major performance problems, and most 
likely need a major overhaul. Given their incremental development 
during 1.17 (Y! internal), they were breaking ABIs often, causing major 
problems with deployments (it's been a mess). 2) The Remap APIs are 
majorly horky IMO. At a minimum, we need to fix the things that are 
missing and broken (I'm fairly certain that the chaining of remap 
plugins is completely wrong, for example).

John's comments and bug reports regarding the disk cache make me 
believe that it's a worthwhile change to make now, and doing so will 
avoid breaking things after the initial release.


All of these are up for discussion; I obviously shouldn't go out and 
call them "personal requirements", that was bad wording on my part. I 
should have said "personal wish list".

> This reminds me of when we tried to (or are we still trying to) stabilize the 1.17 branch and there was a lot of push to add in features (SRV, string_get_ref, redirect, etc).  These features created instability and were not properly tested.  I don't think anyone wants to go down that road again...
>    

Yes, that is a valid point. I think the main problem with the 1.17 
release was that we kept adding more and more features to it, without 
ever letting it stabilize between additions. So, let's decide exactly 
what should and should not go into the first ASF release (which I 
believe we'll call "2.0"). If the things on my personal preference 
list don't get addressed, so be it. I still believe that
stabilizing APIs / ABIs now would make it a more stable platform for 
people to build on, but it's entirely possible I'm completely wrong.

Let's start a Confluence Wiki page with the proposals for what should be 
candidates for going into a "2.0" release, and then vote on each one of 
them. I think that's the HTTPD way, right Paul?  I've created this page, 
please help out and update it with details and other ideas:

     http://cwiki.apache.org/confluence/display/TS/Release-2.0


I'm not adding sections on obvious things here, like proper startup 
scripts, traffic_manager actually working, etc. Things that are 
outright broken but used to work, we clearly have to fix before we can 
call it "stable" :).


Cheers,

-- leif


Re: C++ version of InkAPI ?

Posted by Bryan Call <bc...@yahoo-inc.com>.
On Nov 25, 2009, at 3:32 PM, Leif Hedstrom wrote:

> On 11/25/2009 03:51 PM, Bryan Call wrote:
>> We also need to worry about stability.  We have a lot of untested changes in the tree, and we don't have any automated testing to verify that the changes we have already made to the Apache tree don't contain any hidden gems (bugs).  There has to be a balance between breaking compatibility moving forward and having a stable code base.
>> 
>> APIs are something that should be very stable, since changing them requires real work (human time).  Files that can be blown away and recreated over time should be considered less important.  I don't think we need to have a major version number bump for files, but we do for breaking APIs.
>>   
> 
> What do you mean "files"? Like the hostdb? Then yes, that's not a huge deal. Making changes to the cache could be very disruptive. It could be prohibitively expensive for someone to have to blow away TBs of cache just for doing a minor Traffic Server upgrade (I certainly would think twice before doing that myself).
> 

I am talking about internal files created by Traffic Server, cache and hostdb being a couple of them.  Someone with a large setup would roll out software to a few servers at a time.  Most caches have a Zipfian distribution and a working set that would populate the cache to a stable cache hit ratio in a few hours; this has been my experience with very large databases.
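
As a toy illustration of that warm-up behavior (the object counts, cache size, and Zipf exponent below are made up for the sketch, not Traffic Server measurements), an LRU cache fed Zipf-distributed keys settles at a stable hit ratio after a modest number of requests:

```cpp
#include <cstddef>
#include <list>
#include <random>
#include <unordered_map>
#include <vector>

// Simulate a cold LRU cache under a Zipf(s=1) request stream and
// return the overall hit ratio. Illustrative only.
double hit_ratio_after(std::size_t objects, std::size_t cache_slots,
                       std::size_t requests, unsigned seed) {
  std::vector<double> w(objects);
  for (std::size_t i = 0; i < objects; ++i)
    w[i] = 1.0 / double(i + 1);  // Zipf weights, exponent 1
  std::mt19937 rng(seed);
  std::discrete_distribution<std::size_t> zipf(w.begin(), w.end());

  std::list<std::size_t> lru;  // front = most recently used
  std::unordered_map<std::size_t, std::list<std::size_t>::iterator> pos;
  std::size_t hits = 0;
  for (std::size_t r = 0; r < requests; ++r) {
    std::size_t key = zipf(rng);
    auto it = pos.find(key);
    if (it != pos.end()) {
      ++hits;
      lru.splice(lru.begin(), lru, it->second);  // move to front
    } else {
      if (lru.size() == cache_slots) {  // evict least recently used
        pos.erase(lru.back());
        lru.pop_back();
      }
      lru.push_front(key);
      pos[key] = lru.begin();
    }
  }
  return double(hits) / double(requests);
}
```

With, say, 1000 objects and 100 cache slots, the heavy head of the Zipf distribution means the cache reaches a healthy hit ratio even though it started cold.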

If we feel it is a major disruption then we should do a major version bump for cache incompatibility or make changes in a way that offers backwards compatibility through a configuration setting.

>> I haven't seen many requests for caches over 512GB per server.  I think it is important to have this change, more for partition sizes and the potential to reduce the number of partitions (which reduces seeking for writes).
>>   
> 
> Hmmm, internally at Y!, there is at least one very large deployment with 2.5TB of cache per machine (Wretch). I don't know what the "outside" world would do, but 512GB is not a huge cache, particularly in a forward proxy setup.
> 

Yes, you just proved my point.  There are few people who have large caches.  You can add Search Crawler and Flickr (which doesn't use TS) to the list.  This is a minority of the users and a minority of the traffic.  Few groups are asking for larger caches.

>> For the first release I would like to see fewer changes so we can move over to it more quickly within Yahoo!.  If we try to push too much into the release we are going to have a harder time moving over to the Apache tree.
>>   
> 
> That's a "management" decision we have to make. I'd rather break things now that are difficult to break later, that would include:
> 
>    * APIs (InkAPI, Remap API, any CLI APIs); in particular, newer APIs like the Cache plugin APIs need to be finalized.
>    * ABIs (for above)
>    * Disk cache (RAM cache doesn't matter, since it's volatile)
>    * Any changes related to complete the 64-bit support (e.g. objects > 2GB), possibly related to APIs.
> 
> 
> I'd also argue that if we don't make sure that upgrading within the 2.x "branch" can be done internally at Y! without major disturbance, we'll never get the ASF version successfully used internally.  This is one thing we've been really good about at Y! so far, there's been very few changes that would cause an upgrade to be disruptive for our users.
> 

The items you list above are reasons groups will be less likely to move to the Apache branch.  People will have to modify their plugins and wipe their caches.  According to you this is "very disruptive".


> I'd suggest we come up with a list of the requirements for what needs to be done in the "2.x" branch, and then freeze it. I'd be hard to convince that addressing the issues listed above is not one of those requirements :).

You haven't addressed anything about stability and how we are going to test all the changes.  There have been a lot of changes to the Apache tree that haven't been fully tested.  Also, there have been a lot of changes that have happened and are happening to the internal branches that haven't made it to the Apache branch yet.

This reminds me of when we tried to (or are we still trying to) stabilize the 1.17 branch and there was a lot of push to add in features (SRV, string_get_ref, redirect, etc).  These features created instability and were not properly tested.  I don't think anyone wants to go down that road again...

-Bryan


Re: C++ version of InkAPI ?

Posted by Leif Hedstrom <zw...@apache.org>.
On 11/25/2009 03:51 PM, Bryan Call wrote:
> We also need to worry about stability.  We have a lot of untested changes in the tree, and we don't have any automated testing to verify that the changes we have already made to the Apache tree don't contain any hidden gems (bugs).  There has to be a balance between breaking compatibility moving forward and having a stable code base.
>
> APIs are something that should be very stable, since changing them requires real work (human time).  Files that can be blown away and recreated over time should be considered less important.  I don't think we need to have a major version number bump for files, but we do for breaking APIs.
>    

What do you mean "files"? Like the hostdb? Then yes, that's not a huge 
deal. Making changes to the cache could be very disruptive. It could be 
prohibitively expensive for someone to have to blow away TBs of cache 
just for doing a minor Traffic Server upgrade (I certainly would think 
twice before doing that myself).

> I haven't seen many requests for caches over 512GB per server.  I think it is important to have this change, more for partition sizes and the potential to reduce the number of partitions (which reduces seeking for writes).
>    

Hmmm, internally at Y!, there is at least one very large deployment with 
2.5TB of cache per machine (Wretch). I don't know what the "outside" 
world would do, but 512GB is not a huge cache, particularly in a forward 
proxy setup.

> For the first release I would like to see fewer changes so we can move over to it more quickly within Yahoo!.  If we try to push too much into the release we are going to have a harder time moving over to the Apache tree.
>    

That's a "management" decision we have to make. I'd rather break things 
now that are difficult to break later; that would include:

     * APIs (InkAPI, Remap API, any CLI APIs); in particular, newer APIs 
like the Cache plugin APIs need to be finalized.
     * ABIs (for the above)
     * Disk cache (the RAM cache doesn't matter, since it's volatile)
     * Any changes needed to complete the 64-bit support (e.g. objects 
> 2GB), possibly related to APIs.


I'd also argue that if we don't make sure that upgrading within the 2.x 
"branch" can be done internally at Y! without major disturbance, we'll 
never get the ASF version successfully used internally.  This is one 
thing we've been really good about at Y! so far; there have been very few 
changes that would cause an upgrade to be disruptive for our users.

I'd suggest we come up with a list of the requirements for what needs to 
be done in the "2.x" branch, and then freeze it. I'd be hard to convince 
that addressing the issues listed above is not one of those requirements :).

Cheers,

-- Leif


Re: C++ version of InkAPI ?

Posted by Bryan Call <bc...@yahoo-inc.com>.
We also need to worry about stability.  We have a lot of untested changes in the tree, and we don't have any automated testing to verify that the changes we have already made to the Apache tree don't contain any hidden gems (bugs).  There has to be a balance between breaking compatibility moving forward and having a stable code base.

APIs are something that should be very stable, since changing them requires real work (human time).  Files that can be blown away and recreated over time should be considered less important.  I don't think we need to have a major version number bump for files, but we do for breaking APIs.

I haven't seen many requests for caches over 512GB per server.  I think it is important to have this change, more for partition sizes and the potential to reduce the number of partitions (which reduces seeking for writes).

For the first release I would like to see fewer changes so we can move over to it more quickly within Yahoo!.  If we try to push too much into the release we are going to have a harder time moving over to the Apache tree.

-Bryan

On Nov 25, 2009, at 2:25 PM, Leif Hedstrom wrote:

> On 11/25/2009 03:12 PM, John Plevyak wrote:
>> 
>> Good point.  So, non-virtual functions, which would imply wrapper objects
>> around internal objects with virtual functions. I wonder if there are any tools to automate that.
>> Sigh.  This is getting complicated.  C++ is a pain.  Makes me think the C API isn't so bad.
> 
> 
> An important goal for our first ASF release ("v2.0") needs to be to freeze anything in APIs and ABIs that would otherwise break backward compatibility. Within the v2 release cycles, we cannot break APIs or ABIs (IMO at least). Since we have no compatibility issues to deal with right now (there has been no release yet :), now is the time to figure out as much as we can for changes that could break compatibility.
> 
> This should include changes in the core too, like the cache dirent changes John is doing (now is definitely the time to do that, so people don't have to worry about nuking their caches when upgrading to a newer TS release).
> 
> So, let's keep the discussions open. I'd urge that we start Confluence Wiki pages for all major code changes that would change internals like this. I'll start one for the Remap plugin APIs once I get the time to work on that (hopefully in a week).
> 
> Cheers,
> 
> -- Leif
> 



Re: C++ version of InkAPI ?

Posted by Leif Hedstrom <zw...@apache.org>.
On 11/25/2009 03:12 PM, John Plevyak wrote:
>
> Good point.  So non-virtual functions which would imply wrapper objects
> for internal objects with virtual functions. I wonder if there are any 
> tools to automate that.
> Sigh.  This is getting complicated.  C++ is a pain.  Makes me think 
> the C API isn't so bad.


An important goal for our first ASF release ("v2.0") needs to be to 
freeze anything in APIs and ABIs that would otherwise break backward 
compatibility. Within the v2 release cycles, we cannot break APIs or 
ABIs (IMO at least). Since we have no compatibility issues to deal with 
right now (there has been no release yet :), now is the time to 
figure out as much as we can for changes that could break compatibility.

This should include changes in the core too, like the cache dirent 
changes John is doing (now is definitely the time to do that, so people 
don't have to worry about nuking their caches when upgrading to a newer 
TS release).

So, let's keep the discussions open. I'd urge that we start Confluence 
Wiki pages for all major code changes that would change internals like 
this. I'll start one for the Remap plugin APIs once I get the time to 
work on that (hopefully in a week).

Cheers,

-- Leif


Re: C++ version of InkAPI ?

Posted by John Plevyak <jp...@acm.org>.
Good point.  So, non-virtual functions, which would imply wrapper objects 
around internal objects with virtual functions. I wonder if there are any 
tools to automate that.
Sigh.  This is getting complicated.  C++ is a pain.  Makes me think the 
C API isn't so bad.
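
A sketch of what those wrapper objects would look like (hypothetical `Txn` / `InternalTxn` names, not actual TS classes):

```cpp
#include <string>

// Internal class: free to use virtual functions, never exposed to plugins.
class InternalTxn {
public:
  virtual ~InternalTxn() {}
  virtual std::string method() const { return method_; }
  virtual void set_method(const std::string &m) { method_ = m; }

private:
  std::string method_{"GET"};
};

// External wrapper: no virtual functions, layout is one pointer, and
// every call just forwards. This is the repetitive boilerplate a
// generator could produce automatically.
class Txn {
public:
  explicit Txn(InternalTxn *impl) : impl_(impl) {}
  std::string method() const { return impl_->method(); }
  void set_method(const std::string &m) { impl_->set_method(m); }

private:
  InternalTxn *impl_;  // lifetime owned by the server core, not the wrapper
};
```

Because the wrapper has no virtuals and a fixed layout, the internal class can gain or reorder virtual functions without breaking plugins compiled against the wrapper header.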

john

Bryan Call wrote:
> Having pure virtuals ties us into not being able to expand the API for 
> the class without breaking binary compatibility.  I would rather see a 
> private implementation design for the API instead.  That way if we 
> want to expand the API to add functionality it won't break binary 
> compatibility.
>
> -Bryan
>
> On 11/25/2009 11:34 AM, John Plevyak wrote:
>>
>> Given that the existing C API is often a wrapper on an internal C++ 
>> API, there is a natural
>> mapping of just using the existing internal C++ APIs as the InkAPI 
>> but just
>> convert them to be pure virtual.   This would also encourage clean 
>> internal
>> APIs to make it easier to use for the InkAPI.   I am not sure about 
>> the whole setter/getter
>> vs exposing some stable data structures vs smarter higher level 
>> interfaces, but perhaps
>> these are better considered on a case by case basis.
>>
>> Doing internal C++ to external C then wrapping the external C with 
>> external C++ is going
>> to result in a lot of code to be maintained unless we can automate it 
>> in some way.  But
>> if it is automated then perhaps we can automate a native extern C++ 
>> interface as well.
>> But if it is automated then why not just use SWIG to do the 
>> automation and why not just
>> use SWIG to automate the external C from the C++ which we have an 
>> internal version of already?
>>
>> Anyway that was the train of logic I went down.. of course it becomes 
>> more tenuous the
>> farther you follow it, but it is still interesting to consider.
>>
>> john
>>
>>
>> Leif Hedstrom wrote:
>>> On 11/25/2009 11:43 AM, John Plevyak wrote:
>>>>
>>>>
>>>> << transfer from IRC>>
>>>>
>>>> Here is a proposal:
>>>>
>>>> 1. C++ APIs
>>>> 2. Clean SWIG for supporting other language
>>>>     (in other words, the C++ APIs would have to work well with SWIG)
>>>>
>>>> Open question: do we expose some very stable data structures, e.g. 
>>>> IOBuffer, VIO ?
>>>
>>>
>>> So, what exactly would change in the APIs? Are we going all out OO, 
>>> and everything becomes class methods, getters/setters etc.?  If we 
>>> do this, we should do an equally drastic redesign of the Remap APIs 
>>> (and no more struct passing etc.).
>>>
>>> As an alternative, how much worse would it be to have a low level C 
>>> API (that we expose) like today, and then make a higher level object 
>>> oriented API that wraps the C APIs?
>>>
>>> For sure, now is the time to do any major changes like this :).
>>>
>>> -- Leif
>>
>


Re: C++ version of InkAPI ?

Posted by Bryan Call <bc...@yahoo-inc.com>.
Having pure virtuals ties us into not being able to expand the API for the 
class without breaking binary compatibility.  I would rather see a 
private implementation design for the API instead.  That way if we want 
to expand the API to add functionality it won't break binary compatibility.
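
A sketch of the hazard (hypothetical interface names, not real InkAPI classes):

```cpp
#include <map>

// A frozen pure-virtual interface that plugins compile against.
class CacheAPI {
public:
  virtual ~CacheAPI() {}
  virtual int read(int key) = 0;
  virtual void write(int key, int value) = 0;
  // Appending e.g. `virtual void remove(int key) = 0;` in a later
  // release changes the vtable layout, so plugins compiled against the
  // old header are generally binary-incompatible with the new server.
};

// A trivial concrete implementation, as a plugin might provide.
class MapCache : public CacheAPI {
public:
  int read(int key) override { return store_[key]; }
  void write(int key, int value) override { store_[key] = value; }

private:
  std::map<int, int> store_;
};
```

With a pimpl-style design the public class layout is a single opaque pointer, so the same expansion happens behind the pointer without touching any vtable the plugin sees.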

-Bryan

On 11/25/2009 11:34 AM, John Plevyak wrote:
>
> Given that the existing C API is often a wrapper on an internal C++ 
> API, there is a natural
> mapping of just using the existing internal C++ APIs as the InkAPI but 
> just
> convert them to be pure virtual.   This would also encourage clean 
> internal
> APIs to make it easier to use for the InkAPI.   I am not sure about 
> the whole setter/getter
> vs exposing some stable data structures vs smarter higher level 
> interfaces, but perhaps
> these are better considered on a case by case basis.
>
> Doing internal C++ to external C then wrapping the external C with 
> external C++ is going
> to result in a lot of code to be maintained unless we can automate it 
> in some way.  But
> if it is automated then perhaps we can automate a native extern C++ 
> interface as well.
> But if it is automated then why not just use SWIG to do the automation 
> and why not just
> use SWIG to automate the external C from the C++ which we have an 
> internal version of already?
>
> Anyway that was the train of logic I went down.. of course it becomes 
> more tenuous the
> farther you follow it, but it is still interesting to consider.
>
> john
>
>
> Leif Hedstrom wrote:
>> On 11/25/2009 11:43 AM, John Plevyak wrote:
>>>
>>>
>>> << transfer from IRC>>
>>>
>>> Here is a proposal:
>>>
>>> 1. C++ APIs
>>> 2. Clean SWIG for supporting other language
>>>     (in other words, the C++ APIs would have to work well with SWIG)
>>>
>>> Open question: do we expose some very stable data structures, e.g. 
>>> IOBuffer, VIO ?
>>
>>
>> So, what exactly would change in the APIs? Are we going all out OO, 
>> and everything becomes class methods, getters/setters etc.?  If we do 
>> this, we should do an equally drastic redesign of the Remap APIs (and 
>> no more struct passing etc.).
>>
>> As an alternative, how much worse would it be to have a low level C 
>> API (that we expose) like today, and then make a higher level object 
>> oriented API that wraps the C APIs?
>>
>> For sure, now is the time to do any major changes like this :).
>>
>> -- Leif
>



Re: C++ version of InkAPI ?

Posted by John Plevyak <jp...@acm.org>.
I agree with everything you are saying, except that I think there is 
value in a simple C++ interface of just abstract classes with virtual 
functions and perhaps a few exposed instance variables for very stable 
common objects. The reason is that the internals are modeled that way, 
and most plugins will be written in a language which supports at least 
basic OO features and can be mapped using an automatic tool like SWIG. 

Of course, good copyable examples are critical.  One reason I am bringing 
this up is that some of the best examples are built in and written against 
the internal C++ API, which makes them harder to use as templates for 
the external C API.

I agree that FULL C++ is a world of pain in terms of complexity and 
compatibility, and on balance not worth it.
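
A sketch of that shape of interface (all names hypothetical, not actual TS types): an abstract class the plugin implements, plus a very stable plain-data object with exposed instance variables:

```cpp
#include <cstddef>

// A very stable common object exposed with plain instance variables,
// in the spirit of IOBuffer/VIO (this layout is purely illustrative).
struct Buf {
  const char *data;
  std::size_t len;
};

// Abstract class with virtual functions: the plugin implements it, and
// a tool like SWIG can wrap such an interface for other languages.
class TransformHook {
public:
  virtual ~TransformHook() {}
  virtual std::size_t consume(const Buf &in) = 0;  // return bytes handled
};

// Example plugin-side implementation that just counts bytes.
class CountingHook : public TransformHook {
public:
  std::size_t total = 0;
  std::size_t consume(const Buf &in) override {
    total += in.len;
    return in.len;
  }
};
```

Keeping the interface to virtual calls and one or two frozen structs is exactly the subset that maps cleanly through SWIG to other languages.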

john


Belmon, Stephane wrote:
> Maybe it depends what kind of plugins you're talking about.
>
> Simple header munging (a la output-header.c) is simple(-ish) 
> almost no matter what; boilerplate is boilerplate. Realistically, 
> people will just cut & paste from samples (thanks Chris).
>
> A really complex plugin -- say, analyze the payload as it comes
> along, using *remote* services to do so, creating a bunch of 
> new VIOs etc. -- is a completely different animal. Maybe
> it would benefit from a lot of help from the API. It's 
> arguable whether people would actually go down that path; if 
> the protocol to that remote service isn't dead simple, you'd really
> like to use an existing library to speak it, which is not going
> to play well with the TS infrastructure.
>
> It also really depends what is meant by "a C++ API". 
>
> C semantics with syntactic sugar isn't earth-shattering, but you 
> also don't get all that much from it (forgot to unlock/release/
> reenable before return? Tough.) Full-on C++, where API methods
> regularly throw, expect auto_ptrs and return STL containers... 
> that's a whole different beast. And so is one which provides 
> stdlib "equivalents" (a common style).
>
> Either way, documentation and good examples probably trump 
> architecture.
>
> It would be really interesting to see other languages (non-C/C++)
> supported, if only for one reason: there would have to be
> some "easier paths" (making performance tradeoffs), usable 
> from C/C++ as well.
>
>
> -----Original Message-----
> From: John Plevyak [mailto:jplevyak@acm.org] 
> Sent: Wednesday, November 25, 2009 11:34 AM
> To: trafficserver-dev@incubator.apache.org
> Subject: Re: C++ version of InkAPI ?
>
> Given that the existing C API is often a wrapper on an internal C++ API, 
> there is a natural
> mapping of just using the existing internal C++ APIs as the InkAPI but just
> convert them to be pure virtual.   This would also encourage clean internal
> APIs to make it easier to use for the InkAPI.   I am not sure about the 
> whole setter/getter
> vs exposing some stable data structures vs smarter higher level 
> interfaces, but perhaps
> these are better considered on a case by case basis.
>
> Doing internal C++ to external C then wrapping the external C with 
> external C++ is going
> to result in a lot of code to be maintained unless we can automate it in 
> some way.  But
> if it is automated then perhaps we can automate a native extern C++ 
> interface as well.
> But if it is automated then why not just use SWIG to do the automation 
> and why not just
> use SWIG to automate the external C from the C++ which we have an 
> internal version of already?
>
> Anyway that was the train of logic I went down.. of course it becomes 
> more tenuous the
> farther you follow it, but it is still interesting to consider.
>
> john
>
>
> Leif Hedstrom wrote:
>   
>> On 11/25/2009 11:43 AM, John Plevyak wrote:
>>     
>>> << transfer from IRC>>
>>>
>>> Here is a proposal:
>>>
>>> 1. C++ APIs
>>> 2. Clean SWIG for supporting other languages
>>>     (in other words, the C++ APIs would have to work well with SWIG)
>>>
>>> Open question: do we expose some very stable data structures, e.g. 
>>> IOBuffer, VIO ?
>>>       
>> So, what exactly would change in the APIs? Are we going all out OO, 
>> and everything becomes class methods, getters/setters etc.?  If we do 
>> this, we should do an equally drastic redesign of the Remap APIs (and 
>> no more struct passing etc.).
>>
>> As an alternative, how much worse would it be to have a low level C 
>> API (that we expose) like today, and then make a higher level object 
>> oriented API that wraps the C APIs?
>>
>> For sure, now is the time to do any major changes like this :).
>>
>> -- Leif
>>     
>
>
>
>  Protected by Websense Hosted Email Security -- www.websense.com 
>   


RE: C++ version of InkAPI ?

Posted by "Belmon, Stephane" <sb...@websense.com>.
Maybe it depends what kind of plugins you're talking about.

Simple header munging (a la output-header.c) is simple(-ish) 
almost no matter what; boilerplate is boilerplate. Realistically, 
people will just cut & paste from samples (thanks Chris).

A really complex plugin -- say, analyze the payload as it comes
along, using *remote* services to do so, creating a bunch of 
new VIOs etc. -- is a completely different animal. Maybe
it would benefit from a lot of help from the API. It's 
arguable whether people would actually go down that path; if 
the protocol to that remote service isn't dead simple, you'd really
like to use an existing library to speak it, which is not going
to play well with the TS infrastructure.

It also really depends what is meant by "a C++ API". 

C semantics with syntactic sugar isn't earth-shattering, but you 
also don't get all that much from it (forgot to unlock/release/
reenable before return? Tough.) Full-on C++, where API methods
regularly throw, expect auto_ptrs and return STL containers... 
that's a whole different beast. And so is one which provides 
stdlib "equivalents" (a common style).

Either way, documentation and good examples probably trump 
architecture.

It would be really interesting to see other languages (non-C/C++)
supported, if only for one reason: there would have to be
some "easier paths" (making performance tradeoffs), usable 
from C/C++ as well.


-----Original Message-----
From: John Plevyak [mailto:jplevyak@acm.org] 
Sent: Wednesday, November 25, 2009 11:34 AM
To: trafficserver-dev@incubator.apache.org
Subject: Re: C++ version of InkAPI ?

Given that the existing C API is often a wrapper on an internal C++ API, 
there is a natural
mapping of just using the existing internal C++ APIs as the InkAPI but just
convert them to be pure virtual.   This would also encourage clean internal
APIs to make it easier to use for the InkAPI.   I am not sure about the 
whole setter/getter
vs exposing some stable data structures vs smarter higher level 
interfaces, but perhaps
these are better considered on a case by case basis.

Doing internal C++ to external C then wrapping the external C with 
external C++ is going
to result in a lot of code to be maintained unless we can automate it in 
some way.  But
if it is automated then perhaps we can automate a native extern C++ 
interface as well.
But if it is automated then why not just use SWIG to do the automation 
and why not just
use SWIG to automate the external C from the C++ which we have an 
internal version of already?

Anyway that was the train of logic I went down.. of course it becomes 
more tenuous the
farther you follow it, but it is still interesting to consider.

john


Leif Hedstrom wrote:
> On 11/25/2009 11:43 AM, John Plevyak wrote:
>>
>>
>> << transfer from IRC>>
>>
>> Here is a proposal:
>>
>> 1. C++ APIs
>> 2. Clean SWIG for supporting other languages
>>     (in other words, the C++ APIs would have to work well with SWIG)
>>
>> Open question: do we expose some very stable data structures, e.g. 
>> IOBuffer, VIO ?
>
>
> So, what exactly would change in the APIs? Are we going all out OO, 
> and everything becomes class methods, getters/setters etc.?  If we do 
> this, we should do an equally drastic redesign of the Remap APIs (and 
> no more struct passing etc.).
>
> As an alternative, how much worse would it be to have a low level C 
> API (that we expose) like today, and then make a higher level object 
> oriented API that wraps the C APIs?
>
> For sure, now is the time to do any major changes like this :).
>
> -- Leif



 Protected by Websense Hosted Email Security -- www.websense.com 

Re: C++ version of InkAPI ?

Posted by John Plevyak <jp...@acm.org>.
Given that the existing C API is often a wrapper on an internal C++ API, 
there is a natural
mapping of just using the existing internal C++ APIs as the InkAPI but just
convert them to be pure virtual.   This would also encourage clean internal
APIs to make it easier to use for the InkAPI.   I am not sure about the 
whole setter/getter
vs exposing some stable data structures vs smarter higher level 
interfaces, but perhaps
these are better considered on a case by case basis.

Doing internal C++ to external C then wrapping the external C with 
external C++ is going
to result in a lot of code to be maintained unless we can automate it in 
some way.  But
if it is automated then perhaps we can automate a native extern C++ 
interface as well.
But if it is automated then why not just use SWIG to do the automation 
and why not just
use SWIG to automate the external C from the C++ which we have an 
internal version of already?

Anyway that was the train of logic I went down.. of course it becomes 
more tenuous the
farther you follow it, but it is still interesting to consider.

john


Leif Hedstrom wrote:
> On 11/25/2009 11:43 AM, John Plevyak wrote:
>>
>>
>> << transfer from IRC>>
>>
>> Here is a proposal:
>>
>> 1. C++ APIs
>> 2. Clean SWIG for supporting other languages
>>     (in other words, the C++ APIs would have to work well with SWIG)
>>
>> Open question: do we expose some very stable data structures, e.g. 
>> IOBuffer, VIO ?
>
>
> So, what exactly would change in the APIs? Are we going all out OO, 
> and everything becomes class methods, getters/setters etc.?  If we do 
> this, we should do an equally drastic redesign of the Remap APIs (and 
> no more struct passing etc.).
>
> As an alternative, how much worse would it be to have a low level C 
> API (that we expose) like today, and then make a higher level object 
> oriented API that wraps the C APIs?
>
> For sure, now is the time to do any major changes like this :).
>
> -- Leif


Re: C++ version of InkAPI ?

Posted by Leif Hedstrom <zw...@apache.org>.
On 11/25/2009 11:43 AM, John Plevyak wrote:
>
>
> << transfer from IRC>>
>
> Here is a proposal:
>
> 1. C++ APIs
> 2. Clean SWIG for supporting other languages
>     (in other words, the C++ APIs would have to work well with SWIG)
>
> Open question: do we expose some very stable data structures, e.g. 
> IOBuffer, VIO ?


So, what exactly would change in the APIs? Are we going all out OO, and 
everything becomes class methods, getters/setters etc.?  If we do this, 
we should do an equally drastic redesign of the Remap APIs (and no more 
struct passing etc.).

As an alternative, how much worse would it be to have a low level C API 
(that we expose) like today, and then make a higher level object 
oriented API that wraps the C APIs?

For sure, now is the time to do any major changes like this :).

-- Leif

Re: C++ version of InkAPI ?

Posted by John Plevyak <jp...@acm.org>.

<< transfer from IRC>>

Here is a proposal:

1. C++ APIs
2. Clean SWIG for supporting other languages
     (in other words, the C++ APIs would have to work well with SWIG)

Open question: do we expose some very stable data structures, e.g. 
IOBuffer, VIO ?


John Plevyak wrote:
>
> Has any thought been given to a C++ version of the InkAPI?
>
> The current InkAPI includes "bool" so it doesn't even compile with C
> which implies to me that most folks are using C++ already.
>
> Dynamic loading with C++ is only a bit trickier and the resulting 
> interface
> would match the implementation and modern programming styles better.
>
> john


C++ version of InkAPI ?

Posted by John Plevyak <jp...@acm.org>.
Has any thought been given to a C++ version of the InkAPI?

The current InkAPI includes "bool" so it doesn't even compile with C
which implies to me that most folks are using C++ already.

Dynamic loading with C++ is only a bit trickier and the resulting interface
would match the implementation and modern programming styles better.

john

Re: document translation infrastructure?

Posted by Andrus Adamchik <an...@objectstyle.org>.
Many projects are using cwiki.apache.org/confluence to maintain  
documentation. Confluence has version tracking with visual diff  
display. Probably also possible to set up an RSS feed to watch the  
pages that changed. So that should be all you need.

Andrus

On Nov 24, 2009, at 8:20 PM, Miles Libbey wrote:

> Hi folks-
> We have a volunteer to translate our documentation from English into  
> Korean.  Any recommendations for translation management/ 
> infrastructure? That is-- as the english documentation changes, is  
> there any software that can help to find out of date or new strings/ 
> sections?
>
> thanks,
> miles libbey
>
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
> For additional commands, e-mail: general-help@incubator.apache.org
>
>


Re: document translation infrastructure?

Posted by André Malo <nd...@perlig.de>.
* Miles Libbey wrote: 


> Hi folks-
> I work in the Traffic Server incubator project, and have had our first
> request to translate documentation from English into Korean.
>
> I think I'm missing something in the process that Paul describes.  Sounds
> like:
> 1. Someone makes a change to the English docbook/xml file, and submits a
> patch.
> 2. The patch gets reviewed, and assuming high quality changes, gets
> committed

Usually, people commit it directly. The English documentation has a CTR 
policy (commit-then-review).

> 3. something happens in which all the xml.{language_code} files get a
> new "English Revision" comment [what's the something and its
> surrounding process?], and I'm guessing all reviewed by/{language}
> translation comments get removed.

The English documents (xml) are authoritative. They have the 
LastChangedRevision keyword set, which subversion expands inside a comment 
on every checkout.

The translator of a document adds a comment to the translation, which 
contains the svn revision number the translation is based on (also within a 
comment, with a specified format).

The build system does the following on every run:

- check which translations exist
- check the LastChangedRevision of the english documents against each 
  translation and if it is different from the translator's comment, the 
  comment is changed to contain the original translator's base revision and 
  the new one of the english document. These generated comments are further
  modified if the English document changes again.
- generate "meta" files for each english document, which contain the 
  information collected above: which languages are available and whether 
  they are outdated or not.
- these meta files are included in the transformation process (xslt) to 
  generate the final html files and typemaps for mod_negotiation.
- Additionally, on each build a list of outdated files per language is 
  emitted on the console.
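The per-run check André describes could be sketched roughly as follows. This is a hedged illustration only: the file layout, the language list, and the exact comment format are assumptions modeled on the httpd convention shown elsewhere in this thread, and `outdated_translations` is a made-up helper name.

```python
import re
from pathlib import Path

# Tracking comment as used by the httpd docs, e.g.
# <!-- English Revision: 420990 --> (assumed format)
REV_COMMENT = re.compile(r"English Revision:\s*(\d+)")

def outdated_translations(english_rev, manual_dir, languages=("de", "fr", "ko")):
    """Return (filename, base_rev) pairs for translations whose recorded
    base revision is older than the English document's last-changed rev."""
    stale = []
    for lang in languages:
        for path in sorted(Path(manual_dir).glob(f"*.xml.{lang}")):
            m = REV_COMMENT.search(path.read_text(encoding="utf-8"))
            if m and int(m.group(1)) < english_rev:
                stale.append((path.name, int(m.group(1))))
    return stale
```

A real build would additionally rewrite the comment to the `base:new` range form and emit the per-document meta files; this sketch only shows the staleness comparison.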

[...]


> 6. Something transforms the xml into html.  When transforming,
>   - if there is a (outdated) reference, the language site gets a "This
> translation may be out of date" message on the relevant pages (including
> the index).

Yes.

>   - the html files are copied to a language directory, removing the
> .{language_code} from the file name in the process
> [when does this happen? Is the priority to get a better English version
> out quickly or give other languages a chance to catch up before a push
> date?]

See the typemap story above. There's a corresponding httpd configuration 
working with the typemaps.

nd

---------------------------------------------------------------------
To unsubscribe, e-mail: docs-unsubscribe@httpd.apache.org
For additional commands, e-mail: docs-help@httpd.apache.org


Re: document translation infrastructure?

Posted by Miles Libbey <ml...@apache.org>.
Hi folks-
I work in the Traffic Server incubator project, and have had our first 
request to translate documentation from English into Korean.

I think I'm missing something in the process that Paul describes.  Sounds 
like:
1. Someone makes a change to the English docbook/xml file, and submits a 
patch.
2. The patch gets reviewed, and assuming high quality changes, gets 
committed
3. something happens in which all the xml.{language_code} files get a 
new "English Revision" comment [what's the something and its 
surrounding process?], and I'm guessing all reviewed by/{language} 
translation comments get removed.
4. Translators run something like
  svn update
  grep "(outdated) -->" *.xml.fr
to get a list of files that are outdated.  [is there something that 
prompts translators to do this?]
5. Translators submit patches (including changing the "English Revision" 
comment to remove the outdated reference, and their name in a 
translation comment), a second person reviews.  Assuming high quality 
changes the second person adds their name to a reviewed by comment, and 
the change is checked in.
6. Something transforms the xml into html.  When transforming,
  - if there is a (outdated) reference, the language site gets a "This 
translation may be out of date" message on the relevant pages (including 
the index).
  - the html files are copied to a language directory, removing the 
.{language_code} from the file name in the process
[when does this happen? Is the priority to get a better English version 
out quickly or give other languages a chance to catch up before a push 
date?]

Roughly correct?

Our current documentation is HTML based -- is there anything about the 
httpd doc process that could not be done for html (vs xml)?

Thanks!
miles libbey

Paul Querna may have written the following on 11/25/09 11:41 AM:
> (adding docs@httpd cc)
>
> On Tue, Nov 24, 2009 at 10:20 AM, Miles Libbey<ml...@apache.org>  wrote:
>> Hi folks-
>> We have a volunteer to translate our documentation from English into Korean.
>>   Any recommendations for translation management/infrastructure? That is-- as
>> the english documentation changes, is there any software that can help to
>> find out of date or new strings/sections?
>
> I would recommend looking at or copying how the httpd project handles
> documentation translation.
>
> <http://httpd.apache.org/docs-project/docsformat.html>   Explains some
> of the basics.
>
> For translations, the build keeps track of which subversion revs
> changed the english version of the document, and then modifies the
> non-english translations with information about the missing revisions.
>   On the generated output, it also automatically adds a banner saying
> that the file is out of date compared to the english version.
>
> A concrete example:
> <https://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/manual/bind.xml>
> is the current english version of the bind() docs.
>
> the meta file keeps track of which translations are outdated:
> <https://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/manual/bind.xml.meta>
>
> If you look at the german translation:
> <https://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/manual/bind.xml.de>
> You can see it keeps a comment at the top of the file, tracking the
> SVN revisions the english version has over the german version:
> <!-- English Revision: 420990:587444 (outdated) -->
>
> For the translator, they can then run svn log/diff over that rev range
> and update their translation.
>
> This system seems to work pretty well for docs@httpd, and I imagine it
> could be adapted to raw HTML.
>
> Someone from docs@httpd could likely explain it better....



Re: document translation infrastructure?

Posted by Paul Querna <pa...@querna.org>.
(adding docs@httpd cc)

On Tue, Nov 24, 2009 at 10:20 AM, Miles Libbey <ml...@apache.org> wrote:
> Hi folks-
> We have a volunteer to translate our documentation from English into Korean.
>  Any recommendations for translation management/infrastructure? That is-- as
> the english documentation changes, is there any software that can help to
> find out of date or new strings/sections?

I would recommend looking at or copying how the httpd project handles
documentation translation.

<http://httpd.apache.org/docs-project/docsformat.html>  Explains some
of the basics.

For translations, the build keeps track of which subversion revs
changed the english version of the document, and then modifies the
non-english translations with information about the missing revisions.
 On the generated output, it also automatically adds a banner saying
that the file is out of date compared to the english version.

A concrete example:
<https://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/manual/bind.xml>
is the current english version of the bind() docs.

the meta file keeps track of which translations are outdated:
<https://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/manual/bind.xml.meta>

If you look at the german translation:
<https://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/manual/bind.xml.de>
You can see it keeps a comment at the top of the file, tracking the
SVN revisions the english version has over the german version:
<!-- English Revision: 420990:587444 (outdated) -->

For the translator, they can then run svn log/diff over that rev range
and update their translation.

This system seems to work pretty well for docs@httpd, and I imagine it
could be adapted to raw HTML.

Someone from docs@httpd could likely explain it better....
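The translator's side of this workflow (pull the revision range out of the tracking comment, then run svn log/diff over it) is easy to script. A hedged sketch: the comment format is copied from the bind.xml.de example above, while `svn_commands` and its `url` parameter are made up for illustration.

```python
import re

# Matches both forms of the tracking comment:
#   <!-- English Revision: 587444 -->                     (up to date)
#   <!-- English Revision: 420990:587444 (outdated) -->   (needs work)
COMMENT = re.compile(r"English Revision:\s*(\d+)(?::(\d+))?\s*(\(outdated\))?")

def svn_commands(tracking_comment, url):
    """Build the svn commands a translator would run for an outdated doc."""
    m = COMMENT.search(tracking_comment)
    if not m or not m.group(3):
        return []  # no tracking comment, or the translation is up to date
    base = m.group(1)
    head = m.group(2) or "HEAD"  # fall back to HEAD if no upper rev recorded
    return [f"svn log -r {base}:{head} {url}",
            f"svn diff -r {base}:{head} {url}"]
```

Running it against the comment Paul quoted would print the exact log/diff commands covering the untranslated changes.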





Re: document translation infrastructure?

Posted by Niclas Hedhman <ni...@hedhman.org>.
On Wed, Nov 25, 2009 at 2:20 AM, Miles Libbey <ml...@apache.org> wrote:
> Hi folks-
> We have a volunteer to translate our documentation from English into Korean.
>  Any recommendations for translation management/infrastructure? That is-- as
> the english documentation changes, is there any software that can help to
> find out of date or new strings/sections?

Wouldn't this first of all depend a lot on the documentation source?

If you are using an XML-based source system, then shouldn't the xml:lang
namespace attribute be able to assist?



Cheers
-- 
Niclas Hedhman, Software Developer
http://www.qi4j.org - New Energy for Java

I  live here; http://tinyurl.com/2qq9er
I  work here; http://tinyurl.com/2ymelc
I relax here; http://tinyurl.com/2cgsug

---------------------------------------------------------------------
To unsubscribe, e-mail: general-unsubscribe@incubator.apache.org
For additional commands, e-mail: general-help@incubator.apache.org

