Posted to dev@maven.apache.org by Ben Walding <be...@walding.com> on 2003/05/27 07:41:50 UTC

Re: Remote repo handling (Was maven-new simple patches)

See below

>>>maven as it is, I presume that the directories were created at some
>>>other point. So maybe I have to create the directories on my own in
>>>the Test, or, if there is no place yet that creates the directories in
>>>maven-new, this can be a good place, since it also writes the
>>>artifact.
>>
>>We might want to place this logic somewhere higher up in the
>>food chain so that each downloader doesn't have to duplicate
>>the logic of creating the necessary directories. But I'll take a look.
>
>The satisfier may be a good place. But leaving it in the Downloader seems
>a good idea, because this error was being caught by the Downloader's
>FileNotFound catch, which dies silently (because it is still iterating
>through repositories). This FileNotFound also signals the output-writing
>problem, so it is a bad catch...
>

Is the downloader checking MD5s / signatures? 

If not, then it can't be guaranteed that what the downloader has
downloaded should actually be kept.  As such, I think the downloader
should be told to download to a temporary location. If the post-download
verification checks pass (MD5 / sig / something else), then it might be
reasonable for the satisfier to transfer the file into the local repo
(or back up to some other remote repos - more on this at some later
point). This transfer phase would create dirs as required.
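
Something along these lines is what I have in mind (just a sketch with
made-up names, not actual maven-new code):

// Sketch only - hypothetical names, not the actual maven-new classes.
import java.io.File;
import java.io.IOException;

public class TransferSketch {

    // 'temp' is whatever the downloader wrote to its temporary location.
    // Only if the post-download checks pass does the satisfier transfer it
    // into the local repo, creating directories as part of the transfer.
    public static void transferIfVerified(File temp, File localRepoTarget)
            throws IOException {
        if (!verify(temp)) {
            temp.delete();
            throw new IOException("Post-download verification failed");
        }
        File dir = localRepoTarget.getParentFile();
        if (dir != null && !dir.exists() && !dir.mkdirs()) {
            throw new IOException("Could not create directory " + dir);
        }
        // renameTo can fail across filesystems; a real implementation would
        // fall back to a stream copy.
        if (!temp.renameTo(localRepoTarget)) {
            throw new IOException("Could not move " + temp + " to " + localRepoTarget);
        }
    }

    // Stand-in for the MD5 / sig / whatever checks discussed above.
    private static boolean verify(File file) {
        return file.exists() && file.length() > 0;
    }
}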

Does the downloader know about multiple repos and SNAPSHOTs? Or is this 
the domain of the Satisfier?

I think the Downloader should be good at one thing - downloading. It 
shouldn't be doing hash checking, or even know about multiple repos.  
This fits fairly closely to what I created in Fetch - and hence a 
Downloader "should" only be a thin Avalon veneer around Fetch.



Cheers,

Ben





RE: Remote repo handling (Was maven-new simple patches)

Posted by Michal Maczka <mi...@cqs.ch>.
There is a misunderstanding here.

I want to define clear roles for DependecySatisfier and ArtifactDownloader
(the names can surely be changed if they are inaccurate).
Those roles should not overlap. ArtifactDownloader should be a __black box__
for DependecySatisfier.

Please take a look at MockArtifactDownloder and
DefaultDependecySaftisfierTest to understand
why this is important.


Surely I see ArtifactDownloader as a conglomeration of other auxiliary
classes (like Fetchers, Verifiers etc.), not as a monolith, so that each
aspect of its functionality can be deeply tested.
I made the decomposition and I don't see a reason to go back now.

I still think that the role foreseen for ArtifactDownloader in maven-new
(what the interface looks like) is correct.

So as to what Ben, you and Rafal are requesting ... I agree with
everything ... but I still want to keep the interface in its current form.

Whatever Brian described should be hidden inside ArtifactDownloader and
not visible outside.

So I have no doubts about your intentions ... I just don't want to add
anything to DependecySatisfier, to keep it small and easier to test (I have
made some test cases already).
So for me ArtifactDownloader is responsible for delivering a missing
dependency to the repository when DependecySatisfier requests such an
action. This is the contract I see for this interface.
How it is realized internally is, for me, a different story.
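
Roughly, the contract I have in mind is no more than this (just an
illustration - the signature is made up and the real interface in
maven-new may differ):

// Illustration only - the real ArtifactDownloader in maven-new may differ.
import java.io.IOException;

public interface ArtifactDownloader {

    // Deliver the artifact at the given repository-relative path into the
    // local repository, or throw. Temp files, md5 checks, retries and the
    // choice of remote repository all stay internal to the implementation.
    void download(String artifactPath) throws IOException;
}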


I hope I have explained it clearly.
Michal


> -----Original Message-----
> From: Brian Ewins [mailto:Brian.Ewins@i-documentsystems.com]
> Sent: Tuesday, May 27, 2003 3:44 PM
> To: Maven Developers List
> Subject: Re: Remote repo handling (Was maven-new simple patches)
>
>
>
>
> Rafal Krzewski wrote:
>
> > Michal Maczka wrote:
> >
> >>>It shouldn't be doing hash checking,
> >>
> >>-1
> >>Other reason: the fact that we are using MD5 sum checking should be
> >>"internal" to the downloader.
> >>Maybe not every downloader should do this check.
> >>For example we can imagine a downloader using an SSL connection to the
> >>repository.
> >>The SSL protocol already has built-in mechanisms for checking data
> >>integrity, so no post-download checking will be required.
> > Wrong assumption IMO. MD5 sums are for verifying repository integrity,
> > not transfer integrity. When you download a file it's easy to tell if
> > you got an IOException or not. On the other hand, if you see a file
> > in the repository it's not easy to tell if the person who was uploading
> > it wasn't disconnected in the middle, unless you know the file length
> > somehow. MD5 is a good way of verifying that, plus it gives you some
> > extra confidence in transport integrity beyond what is given
> > by wire protocols.
>
> It's worth pointing out that when we had the recent downtime on ibiblio,
> all downloads appeared to succeed, but none of the downloads were
> correct (they were all of the holding page). Without checking the md5
> held on the repo this would be impossible to detect, and SSL wouldn't
> have helped.
>
> I'm not convinced by the way the 'temp' stuff works now (as in b10;
> maven-new doesn't seem to use temp files). Suppose we have snapshot jars
> in multiple remote repos. From what I've seen the downloader would
> download to a temp location then transfer to the permanent location. If
> this is done, the following sequence is possible:
> Download correct snap from repo 1
> Download correct md5 from repo 1
> Download more recent holding page from repo 2 - replace local snap
> Download broken md5 from repo 2
> Check md5. whoops.
>
> Now the snapshot is broken unnecessarily. It is important to check the
> md5s *before* the download is transferred to the local repo.
>
> IMHO the downloader should never be told the 'real' location that the
> artifact should be downloaded to, only a temp location (currently the
> b10 download code makes up the temp location for itself). The process
> should be like this pseudocode:
>
> # to download 'artifact' to 'local'
> # postCondition: local copy with verified md5 or no local copy
> if !artifact.isSnapshot()
> 	if local.exists()
> 		# already have the release
> 		return;
> found = false
> foreach repo in (remote repos)
> 	# next two lines are 2 separate calls to downloader
> 	download(repo, artifact, temp)
> 	download(repo, artifact.md5, temp.md5)
> 	if verify(temp, temp.md5)
> 		if !local.exists()
> 		|| temp is more recent than local
> 			copy temp to local
> 			copy temp.md5 to local.md5
> 			found = true
> 	remove temp
> 	remove temp.md5
> 	if found && !artifact.isSnapshot()
> 		# get first copy of releases only
> 		break;
> return
>
> If the md5 file is instead a metadata file which includes a timestamp
> or sequence number, then I'd do this instead:
>
> # to download 'artifact' to 'local'
> # postCondition: local copy with verified md5 or no local copy
> if !artifact.isSnapshot()
> 	if local.exists()
> 		# already have the release
> 		return;
> found = false
> foreach repo in (remote repos)
> 	# next two lines are 2 separate calls to downloader
> 	download(repo, artifact, temp)
> 	download(repo, artifact.meta, temp.meta)
> 	# copy function checks md5, timestamp, seqno, etc.
> 	# all from the metadata file.
> 	found = copy(temp, temp.meta, local, local.meta)
> 	remove temp
> 	remove temp.meta
> 	if found && !artifact.isSnapshot()
> 		# get first copy of releases only
> 		break;
> return
>
> Maven-new currently isn't using tempfiles at all (as far as I can see)
> so will be prone to 'good snap replaced by bad snap' problems. Maven-old
> is prone to the same thing, but for a different reason: the md5 isn't
> checked before the temp is copied to the local repo.
>
> -Baz




Re: Remote repo handling (Was maven-new simple patches)

Posted by Brian Ewins <Br...@i-documentsystems.com>.

Rafal Krzewski wrote:

> Michal Maczka wrote:
> 
>>>It shouldn't be doing hash checking,
>>
>>-1
>>Other reason: the fact that we are using MD5 sum checking should be
>>"internal" to the downloader.
>>Maybe not every downloader should do this check.
>>For example we can imagine a downloader using an SSL connection to the
>>repository.
>>The SSL protocol already has built-in mechanisms for checking data
>>integrity, so no post-download checking will be required.
> Wrong assumption IMO. MD5 sums are for verifying repository integrity,
> not transfer integrity. When you download a file it's easy to tell if
> you got an IOException or not. On the other hand, if you see a file
> in the repository it's not easy to tell if the person who was uploading
> it wasn't disconnected in the middle, unless you know the file length
> somehow. MD5 is a good way of verifying that, plus it gives you some
> extra confidence in transport integrity beyond what is given
> by wire protocols.

It's worth pointing out that when we had the recent downtime on ibiblio,
all downloads appeared to succeed, but none of the downloads were
correct (they were all of the holding page). Without checking the md5
held on the repo this would be impossible to detect, and SSL wouldn't
have helped.

I'm not convinced by the way the 'temp' stuff works now (as in b10;
maven-new doesn't seem to use temp files). Suppose we have snapshot jars
in multiple remote repos. From what I've seen the downloader would
download to a temp location then transfer to the permanent location. If
this is done, the following sequence is possible:
Download correct snap from repo 1
Download correct md5 from repo 1
Download more recent holding page from repo 2 - replace local snap
Download broken md5 from repo 2
Check md5. whoops.

Now the snapshot is broken unnecessarily. It is important to check the 
md5s *before* the download is transferred to the local repo.

IMHO the downloader should never be told the 'real' location that the
artifact should be downloaded to, only a temp location (currently the
b10 download code makes up the temp location for itself). The process
should be like this pseudocode:

# to download 'artifact' to 'local'
# postCondition: local copy with verified md5 or no local copy
if !artifact.isSnapshot()
	if local.exists()
		# already have the release
		return;
found = false
foreach repo in (remote repos)
	# next two lines are 2 separate calls to downloader
	download(repo, artifact, temp)
	download(repo, artifact.md5, temp.md5)
	if verify(temp, temp.md5)
		if !local.exists()
		|| temp is more recent than local
			copy temp to local
			copy temp.md5 to local.md5
			found = true
	remove temp
	remove temp.md5
	if found && !artifact.isSnapshot()
		# get first copy of releases only
		break;
return

If the md5 file is instead a metadata file which includes a timestamp
or sequence number, then I'd do this instead:

# to download 'artifact' to 'local'
# postCondition: local copy with verified md5 or no local copy
if !artifact.isSnapshot()
	if local.exists()
		# already have the release
		return;
found = false
foreach repo in (remote repos)
	# next two lines are 2 separate calls to downloader
	download(repo, artifact, temp)
	download(repo, artifact.meta, temp.meta)
	# copy function checks md5, timestamp, seqno, etc.
	# all from the metadata file.
	found = copy(temp, temp.meta, local, local.meta)		
	remove temp
	remove temp.meta
	if found && !artifact.isSnapshot()
		# get first copy of releases only
		break;
return
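
For clarity, the first (md5) variant above comes out in Java roughly like
this - a sketch only, where the helper methods are placeholders rather
than the b10 or maven-new code:

// Sketch of the md5-based variant; helpers are left abstract on purpose.
import java.io.File;
import java.io.IOException;
import java.util.Iterator;
import java.util.List;

public abstract class SnapshotUpdateSketch {

    // Postcondition: local copy with verified md5, or no new local copy.
    public void update(List remoteRepos, String artifactPath,
                       File local, File localMd5, boolean snapshot)
            throws IOException {
        if (!snapshot && local.exists()) {
            return; // already have the release
        }
        File temp = File.createTempFile("maven-", ".part");
        File tempMd5 = File.createTempFile("maven-", ".md5");
        try {
            for (Iterator i = remoteRepos.iterator(); i.hasNext();) {
                String repo = (String) i.next();
                download(repo, artifactPath, temp);
                download(repo, artifactPath + ".md5", tempMd5);
                boolean found = false;
                if (verify(temp, tempMd5)
                        && (!local.exists() || isMoreRecent(temp, local))) {
                    copy(temp, local);      // only verified data reaches the repo
                    copy(tempMd5, localMd5);
                    found = true;
                }
                if (found && !snapshot) {
                    break; // first good copy is enough for releases
                }
            }
        } finally {
            temp.delete();
            tempMd5.delete();
        }
    }

    // The real download / md5 check / copy logic lives elsewhere.
    protected abstract void download(String repo, String path, File dest)
            throws IOException;

    protected abstract boolean verify(File file, File md5File) throws IOException;

    protected abstract void copy(File from, File to) throws IOException;

    protected boolean isMoreRecent(File a, File b) {
        return a.lastModified() > b.lastModified();
    }
}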

Maven-new currently isn't using tempfiles at all (as far as I can see) 
so will be prone to 'good snap replaced by bad snap' problems. Maven-old 
is prone to the same thing, but for a different reason: the md5 isn't 
checked before the temp is copied to the local repo.

-Baz







RE: Remote repo handling (Was maven-new simple patches)

Posted by Michal Maczka <mi...@cqs.ch>.

> -----Original Message-----
> From: Rafal Krzewski [mailto:Rafal.Krzewski@caltha.pl]
> Sent: Tuesday, May 27, 2003 1:04 PM
> To: Maven Developers List
> Subject: Re: Remote repo handling (Was maven-new simple patches)
>
>
> Michal Maczka wrote:
> >>It shouldn't be doing hash checking,
> > -1
> >
> > The downloader should report whether its job was done correctly. I see
> > checking of MD5 sums as an integral part of this process. For example,
> > if the file which was downloaded is corrupted, the downloader can try
> > to re-download it one more time.
>
> How often does that kind of thing happen to you? TCP/IP is damn good
> at preserving the integrity of the data it transfers.
> If you download the whole file (no IOException) and there is an MD5
> mismatch, you can safely assume that the file was deployed incorrectly
> or tampered with (by a silly attacker, because an unsigned MD5 sum is
> trivial to recreate for a modified file). I think re-downloading in such
> a situation is a waste of time and bandwidth in 99.9% of cases.
>
> > So once the downloader finishes it should either:
> > - report a success, which means for DependecySatisfier (or whatever)
> >   that the requested file is in the repository,
> > or
> > - report failure and clean everything up.
>
> I second Ben's opinion that the downloader should write to a temporary
> location. Cleaning up in case of failure is good behaviour, of course.
>

I have nothing against it. But it should be transparent to other classes.
How it realizes the "interface contract" should be an internal concern of
the downloader. And for me this contract is a bit different from Ben's,
and includes MD5 checksum checking by ArtifactDownloader.

> > Other reason: the fact that we are using MD5 sum checking should be
> > "internal" to the downloader.
> > Maybe not every downloader should do this check.
> > For example we can imagine a downloader using an SSL connection to the
> > repository.
> > The SSL protocol already has built-in mechanisms for checking data
> > integrity, so no post-download checking will be required.
>
> Wrong assumption IMO. MD5 sums are for verifying repository integrity,
> not transfer integrity. When you download a file it's easy to tell if
> you got an IOException or not. On the other hand, if you see a file
> in the repository it's not easy to tell if the person who was uploading
> it wasn't disconnected in the middle, unless you know the file length
> somehow. MD5 is a good way of verifying that, plus it gives you some
> extra confidence in transport integrity beyond what is given
> by wire protocols.
>
> >>or even know about multiple repos.
> >
> >
> > +1
>
> Definitely.
>
>
> R.


Michal




Re: Remote repo handling (Was maven-new simple patches)

Posted by Rafal Krzewski <Ra...@caltha.pl>.
Michal Maczka wrote:
>>It shouldn't be doing hash checking,
> -1
> 
> The downloader should report whether its job was done correctly. I see
> checking of MD5 sums as an integral part of this process. For example,
> if the file which was downloaded is corrupted, the downloader can try
> to re-download it one more time.

How often does that kind of thing happen to you? TCP/IP is damn good
at preserving the integrity of the data it transfers.
If you download the whole file (no IOException) and there is an MD5
mismatch, you can safely assume that the file was deployed incorrectly
or tampered with (by a silly attacker, because an unsigned MD5 sum is
trivial to recreate for a modified file). I think re-downloading in such
a situation is a waste of time and bandwidth in 99.9% of cases.

> So once the downloader finishes it should either:
> - report a success, which means for DependecySatisfier (or whatever)
>   that the requested file is in the repository,
> or
> - report failure and clean everything up.

I second Ben's opinion that the downloader should write to a temporary
location. Cleaning up in case of failure is good behaviour, of course.

> Other reason: the fact that we are using MD5 sum checking should be
> "internal" to the downloader.
> Maybe not every downloader should do this check.
> For example we can imagine a downloader using an SSL connection to the
> repository.
> The SSL protocol already has built-in mechanisms for checking data
> integrity, so no post-download checking will be required.

Wrong assumption IMO. MD5 sums are for verifying repository integrity,
not transfer integrity. When you download a file it's easy to tell if
you got an IOException or not. On the other hand, if you see a file
in the repository it's not easy to tell if the person who was uploading
it wasn't disconnected in the middle, unless you know the file length
somehow. MD5 is a good way of verifying that, plus it gives you some
extra confidence in transport integrity beyond what is given
by wire protocols.
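
Concretely, checking a downloaded file against the .md5 held in the repo
is cheap. A sketch (assuming the .md5 file contains the hex digest as its
first token):

// Sketch: compare a downloaded file against the .md5 stored in the repo.
import java.io.BufferedInputStream;
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileReader;
import java.io.IOException;
import java.io.InputStream;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Check {

    public static boolean matches(File file, File md5File)
            throws IOException, NoSuchAlgorithmException {
        String expected = readFirstToken(md5File).toLowerCase();
        return expected.equals(hexDigest(file));
    }

    private static String hexDigest(File file)
            throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        InputStream in = new BufferedInputStream(new FileInputStream(file));
        try {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        } finally {
            in.close();
        }
        byte[] digest = md.digest();
        StringBuffer sb = new StringBuffer();
        for (int i = 0; i < digest.length; i++) {
            String hex = Integer.toHexString(digest[i] & 0xff);
            if (hex.length() == 1) {
                sb.append('0');
            }
            sb.append(hex);
        }
        return sb.toString();
    }

    private static String readFirstToken(File md5File) throws IOException {
        BufferedReader r = new BufferedReader(new FileReader(md5File));
        try {
            String line = r.readLine();
            if (line == null) {
                throw new IOException("Empty md5 file: " + md5File);
            }
            return line.trim().split("\\s+")[0];
        } finally {
            r.close();
        }
    }
}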

>>or even know about multiple repos.
> 
> 
> +1

Definitely.


R.




RE: Remote repo handling (Was maven-new simple patches)

Posted by Michal Maczka <mi...@cqs.ch>.
Forget how it is now in maven-new (it is just a copy/paste from 'old' maven).

I really think that the Downloader should be a "black box" for downloading
files.
Once asked to do so, it should do all the work, and report an error if it
fails for some reason, so that, for example, a download from the next
repository in the chain can be made.

Once the downloader finishes we either:

a) have a valid (checked - whatever that means) file in the local repository.
   It should be hidden that md5 files are used (maybe we don't even need
   them in the local repository...), or
b) have to try another location (repository) or report an error.

Only such "black boxing" will let us easily test the other components of
the system, know how they will behave in any case, and see how resilient
they are.
This will let us write various simulations and test different scenarios.
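
To illustrate the idea (a simplified sketch only - not the actual
MockArtifactDownloder from maven-new): a mock downloader that records
requests and can be told to fail lets the satisfier tests cover the
success, failure and "try the next repository" paths without touching
the network.

// Simplified illustration of the "black box" idea, not real maven-new code.
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class MockDownloaderSketch {

    // The contract the satisfier sees (signature made up for illustration).
    public interface ArtifactDownloader {
        void download(String artifactPath) throws IOException;
    }

    // Records every request and fails on demand, so satisfier behaviour
    // can be simulated and tested without any network access.
    public static class MockDownloader implements ArtifactDownloader {
        private final List requested = new ArrayList();
        private boolean failNext = false;

        public void setFailNext(boolean failNext) {
            this.failNext = failNext;
        }

        public void download(String artifactPath) throws IOException {
            requested.add(artifactPath);
            if (failNext) {
                failNext = false;
                throw new IOException("simulated download failure");
            }
            // otherwise pretend the artifact is now in the local repository
        }

        public List getRequestedArtifacts() {
            return requested;
        }
    }
}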


Michal



> -----Original Message-----
> From: Paulo Silveira [mailto:paulo@paulo.com.br]
> Sent: Tuesday, May 27, 2003 1:07 PM
> To: 'Maven Developers List'
> Subject: RE: Remote repo handling (Was maven-new simple patches)
>
>
> hello
>
> > -----Original Message-----
> > From: Michal Maczka [mailto:michal.maczka@cqs.ch]
> > Sent: terça-feira, 27 de maio de 2003 07:41
> > To: Maven Developers List
> > Subject: RE: Remote repo handling (Was maven-new simple patches)
> >
> > >
> > > Does the downloader know about multiple repos and SNAPSHOTs? Or is
> > > this the domain of the Satisfier?
>
> The satisfier deals a little with the snapshots, adding them as
> failures, so the Downloader always gets them.
>
> > >
> > > I think the Downloader should be good at one thing - downloading.
> >
> > +1
>
> For now, the Downloader knows about the remote repos and iterates among
> them. I also prefer Ben's idea. The downloader should only receive an
> Artifact reference (probably no Project reference is needed, maybe Proxy
> properties), and, for example, return a File reference. Internally it
> outputs to a File.createTempFile(), and returns it, or throws an
> exception.
>
> The satisfier can do the repo iteration and directory creation (or
> maybe even the Processor could create them). And if the download
> succeeds, it copies the file to the right place.
>
> In the other mail I sent a patch to create the dirs, but the Satisfier
> already does that before getting the dependencies through the
> Downloader.
>
>
> >
> > Other reason: the fact that we are using MD5 sum checking
> > should be "internal" to the downloader. Maybe not every
> > downloader should do this check. For example we can imagine a
> > downloader using an SSL connection to the repository. The SSL
> > protocol already has built-in mechanisms for checking data
> > integrity, so no post-download checking will be required.
> >
>
> Michal, but for now, it seems that the files are only checked by the
> Verifier after every artifact gets downloaded (satisfied to be more
> precise). The Processor calls it.
>
> bye
>
> >
> >
> > >or even know about multiple repos.
> >
> > +1
> >
> > > This fits fairly closely to what I created in Fetch - and hence a
> > > Downloader "should" only be a thin Avalon veneer around Fetch.
> > >
> > >
> > >
> > > Cheers,
> > >
> > > Ben
> > >
> >
> > regards
> >
> > Michal
> >
> >
> >
> >
> >
>
>
>
>




RE: Remote repo handling (Was maven-new simple patches)

Posted by Paulo Silveira <pa...@paulo.com.br>.
hello

> -----Original Message-----
> From: Michal Maczka [mailto:michal.maczka@cqs.ch] 
> Sent: terça-feira, 27 de maio de 2003 07:41
> To: Maven Developers List
> Subject: RE: Remote repo handling (Was maven-new simple patches)
> 
> >
> > Does the downloader know about multiple repos and SNAPSHOTs? Or is 
> > this the domain of the Satisfier?

The satisfier deals a little with the snapshots, adding them as
failures, so the Downloader always gets them.

> >
> > I think the Downloader should be good at one thing - downloading.
> 
> +1

For now, the Downloader knows about the remote repos and iterates among
them. I also prefer Ben's idea. The downloader should only receive an
Artifact reference (probably no Project reference is needed, maybe Proxy
properties), and, for example, return a File reference. Internally it
outputs to a File.createTempFile(), and returns it, or throws an
exception.
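
Something like this, just to illustrate (the names are made up, it is not
the real maven-new code):

// Illustration only - not the actual maven-new Downloader.
import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

public class TempFileDownloader {

    // Fetch one URL into a temp file and return it. The satisfier (or the
    // Processor) decides where the file finally goes.
    public File download(URL artifactUrl) throws IOException {
        File temp = File.createTempFile("maven-artifact-", ".tmp");
        boolean ok = false;
        InputStream in = new BufferedInputStream(artifactUrl.openStream());
        try {
            OutputStream out = new BufferedOutputStream(new FileOutputStream(temp));
            try {
                byte[] buf = new byte[8192];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n);
                }
                ok = true;
            } finally {
                out.close();
            }
        } finally {
            in.close();
            if (!ok) {
                temp.delete(); // don't leave partial downloads behind
            }
        }
        return temp;
    }
}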

The satisfier can do the repo iteration and directory creation (or
maybe even the Processor could create them). And if the download
succeeds, it copies the file to the right place.

In the other mail I sent a patch to create the dirs, but the Satisfier
already does that before getting the dependencies through the
Downloader.


> 
> Other reason: the fact that we are using MD5 sum checking
> should be "internal" to the downloader. Maybe not every
> downloader should do this check. For example we can imagine a
> downloader using an SSL connection to the repository. The SSL
> protocol already has built-in mechanisms for checking data
> integrity, so no post-download checking will be required.
> 

Michal, but for now, it seems that the files are only checked by the
Verifier after every artifact gets downloaded (satisfied to be more
precise). The Processor calls it.

bye

> 
> 
> >or even know about multiple repos.
> 
> +1
> 
> > This fits fairly closely to what I created in Fetch - and hence a 
> > Downloader "should" only be a thin Avalon veneer around Fetch.
> >
> >
> >
> > Cheers,
> >
> > Ben
> >
> 
> regards
> 
> Michal
> 
> 
> 
> 
> 





RE: Remote repo handling (Was maven-new simple patches)

Posted by Michal Maczka <mi...@cqs.ch>.

> -----Original Message-----
> From: Ben Walding [mailto:ben@walding.com]
> Sent: Tuesday, May 27, 2003 7:42 AM
> To: Maven Developers List
> Subject: Re: Remote repo handling (Was maven-new simple patches)
>
>
> See below
>
> >>>maven as it is, I presume that the directories were created at some
> >>>other point. So maybe I have to create the directories on my own in
> >>>the Test, or, if there is no place yet that creates the directories in
> >>>maven-new, this can be a good place, since it also writes the
> >>>artifact.
> >>
> >>We might want to place this logic somewhere higher up in the
> >>food chain so that each downloader doesn't have to duplicate
> >>the logic of creating the necessary directories. But I'll take a look.
> >
> >The satisfier may be a good place. But leaving it in the Downloader seems
> >a good idea, because this error was being caught by the Downloader's
> >FileNotFound catch, which dies silently (because it is still iterating
> >through repositories). This FileNotFound also signals the output-writing
> >problem, so it is a bad catch...
> >
> >
>
> Is the downloader checking MD5s / signatures?
>
> If not, then it can't be guaranteed that what the downloader has
> downloaded should actually be kept.  As such, I think the downloader
> should be told to download to a temporary location. If the post-download
> verification checks pass (MD5 / sig / something else), then it might be
> reasonable for the satisfier to transfer the file into the local repo
> (or back up to some other remote repos - more on this at some later
> point). This transfer phase would create dirs as required.
>
> Does the downloader know about multiple repos and SNAPSHOTs? Or is this
> the domain of the Satisfier?
>
> I think the Downloader should be good at one thing - downloading.

+1


> It shouldn't be doing hash checking,

-1

The downloader should report whether its job was done correctly. I see
checking of MD5 sums as an integral part of this process. For example, if
the file which was downloaded is corrupted, the downloader can try to
re-download it one more time.
So once the downloader finishes it should either:
- report a success, which means for DependecySatisfier (or whatever) that
  the requested file is in the repository,
or
- report failure and clean everything up.
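
As a sketch only (download() and isValid() below are stand-ins for
whatever the implementation really uses):

// Sketch of the retry idea; the real download and validity check live elsewhere.
import java.io.File;
import java.io.IOException;

public abstract class RetryingDownload {

    public void downloadWithRetry(String url, File destination, int maxAttempts)
            throws IOException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            download(url, destination);
            if (isValid(destination)) {
                return; // success: the requested file is in place
            }
            destination.delete(); // corrupted download - clean up before retrying
        }
        throw new IOException("Gave up on " + url + " after " + maxAttempts
                + " attempts");
    }

    protected abstract void download(String url, File destination)
            throws IOException;

    protected abstract boolean isValid(File destination) throws IOException;
}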


Other reason: the fact that we are using MD5 sum checking should be
"internal" to the downloader.
Maybe not every downloader should do this check.
For example we can imagine a downloader using an SSL connection to the
repository.
The SSL protocol already has built-in mechanisms for checking data
integrity, so no post-download checking will be required.



>or even know about multiple repos.

+1

> This fits fairly closely to what I created in Fetch - and hence a
> Downloader "should" only be a thin Avalon veneer around Fetch.
>
>
>
> Cheers,
>
> Ben
>

regards

Michal

