24/25 January 2002 ICRB meeting minutes
Meeting summary by the chair
- We are moving very fast toward an effective CDF GRID. There will
be computers in various places that we will want to interconnect
quite tightly. The success of this will depend on capabilities
we do not have now. Current work on software is moving
in this direction.
- Several institutions now have a
common plan: small farms in the trailers with a few TB of disk
and at least one dual PC per physicist. Notable exceptions are, of
course, overseas: UK, Italy, Korea, and Spain are all planning (on
somewhat different time scales) to have large farms there.
There are also US institutions with lots of computers at home.
There are at present mixed feelings about pooling CDF hardware
together as a shared resource across institutions. Some are already
doing it, some are considering it, some are against it.
Some institutions that have no plan to pool hardware resources with
FCC ones are willing to think it over once a clear plan is laid out
and the advantages (if any) are made apparent.
This situation may thus evolve once the new CAF is in place
and data flows are better understood. Having the hardware in the
trailers offers the clear advantage of resource control, while
putting it at FCC offers professional operation/maintenance and
possibly faster access to the tape robots, besides freeing
physicists from serving as part-time system managers and sparing
them some noise and heat; but the hardware would then have to
become at least partially a common resource. An example of such
an arrangement is the Italian disk at FCC.
Expectations for copying data offsite are large and can only
be satisfied with a significant upgrade of Fermilab's
connectivity. It is very unlikely that a common tape copy
facility could be set up to provide similar capacity, so
lacking a better network people may have to resort
to self-managed tapes in the trailers.
It is the chair's opinion that
CDF must move relentlessly and aggressively to obtain the needed
network upgrades, to keep the Fnal/world connection from becoming
a bottleneck. CDF analysis is very likely to be resource-limited
in the coming years, and logistics impose a limit on what can be
done at FNAL. Being able to transparently move data and/or jobs
to other places may be the only way to effectively add
manpower for hardware installation/operation/maintenance.
This holds beyond the usual GRID propaganda: all in all, Fnal may
not be able to buy all the hardware that we may want here, and
other countries especially will have much more computing power
at home than they will ever add to FCC.
- The next meeting will not overlap with the collaboration meeting.
Short collection of items raised in the discussion:
A few points from the presentations that are not available
in electronic format.
- RedHat 7.2: CDF code works on RedHat 7.2
(called FermiLinux 7.1; see above about the
numbering) starting with version 4.2.0. See
Art's log.
The code management group looks forward to being able to drop
support for RedHat 6 to diminish their work load. Therefore
everybody is encouraged to migrate to RedHat 7 if convenient.
On the other hand, Level 3
and the production farm currently run RedHat 6, so there is no
saying when dropping support for RH6 may be considered.
Also, Fermilab expert Connie Sieh advises
that some older hardware
is better off with RedHat 6; in particular, for machines with
Adaptec SCSI interfaces, Connie reports that performance under
RedHat 7 is poor.
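Before migrating, a user can check which release a machine
currently runs; a minimal sketch (the `/etc/redhat-release` file is
the standard RedHat location, which FermiLinux also uses with its
own numbering):

```shell
# Print the installed RedHat/FermiLinux release, if any.
if [ -f /etc/redhat-release ]; then
    cat /etc/redhat-release
else
    echo "not a RedHat-style system"
fi
```

Note that, as discussed above, the release string reported by
FermiLinux does not match the upstream RedHat version number.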
- gcc: It compiles, but it does not link. The person working on
this is phasing out (see Kevin's talk at the collaboration
meeting). It will have to be picked up by interested institutions.
Libraries are about 8x larger.
Nothing is known about executable sizes and/or speed until an
image can be linked.
Last modified: Fri Feb 1 16:30:39 MET