It should work under Logic too, and even in Cakewalk.
I had indeed tried to fix those crackles by compensating with a larger latency, but then actually playing the parts in real time is far from easy (though of course I imagine it's workable if you write on a score or compose in step time).
I also remember experimenting with RAM management (DFD mode, the preload manager and so on... I didn't get much further with that).
That said, I hadn't pushed the Siedlaczek library much further either; it's true that the price remains an attractive argument!
Once again, here's my feeling about this library: beautiful samples with good articulation handling (lots of ghost notes for sample switching, hence interesting real-time playability), but I think those very qualities also make it a trap, given how heavy the loaded samples are... so plan on more RAM (you probably need to factor the cost of extra RAM into the selling price... but it should still remain attractive in terms of quality-to-price ratio).
Maybe we should try to gather some more information, no?
(I no longer remember the source, sorry)
(and it's in English)
What is DFD?
DFD stands for "direct from disk" and refers to a technique for playing back large and very large instruments and samples without loading them entirely into RAM. In fact, only a certain start portion of each sample is loaded into RAM permanently; the rest is read from the computer's hard disk while the instrument is playing.
With DFD switched on, we can load samples of up to 2 Gigabytes each, and we can load quite a lot of them even on moderately equipped machines. Later, we'll learn how things work together, and we'll see how much RAM we actually need and how to set up our buffer sizes.
Understanding how things work is beneficial if we want to get the best possible performance out of hardware and software.
So why do we need RAM at all, if we can fetch things directly from the hard disk? The answer is:
we can't. At least not as quickly as we'd need to. If we strike a note on our keyboard, we expect to hear sound immediately. Strictly speaking, "immediately" is not possible in the universe as we know it, but we expect something that sounds "immediate" to our ears, which would be somewhere in the range of 1-10 ms. You'll probably know these values: this is called "latency", and it also serves as a measure of how good your soundcard is (the good ones have pretty low latencies, down to 1.5 ms and less).
Ok, so we need to hear something about 1.5 ms after the key is pressed (provided we have a good pro soundcard that can deliver that latency).
To make it short, no hard disk is that quick (the fastest disks available, designed for database servers, get down to about 4 ms). So the only way to solve the problem is to keep the start section of each sample in RAM to give the hard disk a head start.
We now have the following situation: after we strike a note, the initial portion of the sound is played back from the start section we already have in RAM. At the same time, the hard disk starts loading the rest of the sample. The additional time we gain here for the hard disk depends directly on the amount of memory we reserve for the start part. If that memory (I'll call it a "buffer" from now on) is quite large, it can play back for a longer time, which obviously makes the hard disk's job easier.
At this point, we've come to know our first important friend: the preload buffer. As we've seen, the preload buffer is needed once per sample (since each sample can be started by a keystroke and we don't know which one it will be in advance). So we have a first simple formula for our need of RAM:
· Number of samples × preload buffer size = instrument size.
Considering an instrument made up of 200 distinct samples and a preload buffer size of 192 kB, we'll end up with an instrument 37.5 megabytes big. Keep in mind that the actual size of the samples does not matter at all: it's the same whether they are 1 Gigabyte or 200 kB each.
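The formula above is trivial to check in code; here is a minimal sketch (the function name is my own, purely illustrative, not part of any sampler API):

```python
# Sketch of the preload-RAM rule above: one preload buffer per loaded sample.
# The function name is illustrative, not from any real sampler API.
def preload_ram_mb(num_samples: int, preload_buffer_kb: int) -> float:
    """RAM taken by preload buffers, in megabytes."""
    return num_samples * preload_buffer_kb / 1024  # kB -> MB

# The example above: 200 samples with a 192 kB preload buffer each.
print(preload_ram_mb(200, 192))  # -> 37.5
```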
So that's it, you might think, but obviously that's not the whole story (otherwise I wouldn't put it that way). Let's go back to our hard disk, which is currently loading a portion of a sample to catch up with the preload buffer. It's important to understand where the hard disk puts the data it reads. Obviously, we need another memory buffer for that. It could be the same size as the preload buffer, but that's not necessarily required.
An important question is: how many of these other buffers do we need? That's relatively easy: it depends on how many voices we plan to play simultaneously. If we have a maximum polyphony of, say, 100 voices, we need 100 of these buffers (we therefore call them voice buffers). It's very important to understand that these buffers are always there, regardless of how many samples or instruments are loaded.
The next question would be: why can we choose a different size for these buffers? The answer is a pragmatic one: there are instruments made of very many samples (several thousand), which would require quite some memory, so we might want to choose rather small preload buffers. But that does not mean our voice buffers must be equally small, since there are a lot fewer of them (note: generally, larger buffers tend to improve playback performance). For the same reason, we can select the number of "DFDable" voices we allow. We can save quite some memory here if we restrict polyphony, which allows us to choose larger buffer sizes, hence better performance.
Now we know just about everything there is to know about the theoretical basics behind DFD, and as far as memory is concerned, we come to the following conclusions:
· The first thing we need is a bunch of preload buffers, as many as we have samples loaded.
· Then we need some voice buffers, precisely as many as we want voices to play.
· We can adjust each of the buffers' sizes independently to match our requirements and installed memory.
· We can adjust how many DFD voices we want; fewer voices can save quite some additional RAM, which is important to remember.
· Voice buffers are always there, regardless of whether anything is loaded or not. The required space is the number of reserved DFD voices times the voice buffer size.
· Preload buffers can become quite numerous if the loaded instrument(s) contain many samples. Preload memory consumption derives directly from the number of samples used, so it might be necessary to set the preload buffer size quite low if the loaded instruments are large in terms of sample count.
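Putting the two buffer types together, the conclusions above can be sketched as one total; all names and example numbers here are mine and purely illustrative:

```python
# Total DFD RAM = per-sample preload buffers + always-reserved voice buffers.
# Names and numbers are illustrative; check your sampler's DFD panel for real values.
def dfd_ram_mb(num_samples: int, preload_kb: int,
               dfd_voices: int, voice_buffer_kb: int) -> float:
    preload = num_samples * preload_kb      # grows with the instrument's sample count
    voices = dfd_voices * voice_buffer_kb   # fixed cost, present even with nothing loaded
    return (preload + voices) / 1024        # kB -> MB

# E.g. a 2000-sample library with small 96 kB preload buffers,
# plus 100 reserved DFD voices with 256 kB voice buffers each:
print(dfd_ram_mb(2000, 96, 100, 256))  # -> 212.5
```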
Planning and a little experimentation should give you the ability to get maximum performance out of DFD. Let's start with the hardware:
The most critical part is of course the hard disk, and obviously we can gain a lot by choosing the fastest hard disk we can get. "Fast" in the hard-disk world has basically two dimensions: transfer bandwidth and access time. Although transfer bandwidth is important, access time can be even more so. Current 3.5-inch desktop hard disks reach about 40 megabytes of transfer bandwidth per second and an average access time of about 10 ms. Those 40 MB/s would be sufficient for about 200-250 stereo voices at 44.1 kHz / 16 bit. But unfortunately there's still the access time to consider. Access time is the average time needed to address a randomly chosen target position on the hard disk (from any other previous position). And in fact, that's exactly what happens with DFD: whenever several voices are playing simultaneously, many different (virtually randomly spread) data portions must be sought as quickly as possible, hence the lower the access time, the better.
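The "200-250 stereo voices" figure is easy to sanity-check with a back-of-the-envelope calculation (assuming here that 40 MB/s means 40 million bytes per second):

```python
# Streaming cost of one stereo voice at 44.1 kHz / 16 bit:
bytes_per_stereo_voice = 44_100 * 2 * 2  # samples/s * 2 bytes * 2 channels = 176,400 B/s

# Voices a 40 MB/s disk could feed, assuming 40 MB/s = 40,000,000 B/s:
print(round(40_000_000 / bytes_per_stereo_voice))  # -> 227, within the quoted range
```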
And now comes a very important point: the total size of the instruments on the hard disk. Let's consider two instruments (or multis): one might be a 600 Megabyte instrument, the other a 60 Gigabyte library. Given a properly defragmented hard disk, the 600 MB instrument will use only a very small portion of the disk (since 600MB is rather little for modern hard disks), whereas the 60 GB library will use a wide range of the disk. Since accessing a certain data portion is a mechanical procedure, it'll take a considerably longer seek time if the total amount of data is larger. So low access time becomes more and more crucial when it comes to large and very large libraries.
Another thing about access time is that it cannot be scaled the way transfer rate can. Using cheap and popular RAID arrays, the transfer rate can easily be raised almost arbitrarily, but access time will stay (pretty much) the same, even if we combine 4 or more drives. So we come to the following, very important conclusion:
· If you plan to use large libraries, look for a hard disk with a really good access time.
Most desktop disk drives, as said before, are 7200 rpm devices and deliver access times of about 10 ms. But if you want to get high polyphony (150-250 stereo voices) out of large libraries, that may not be good enough. In such cases, you'll need a faster drive. Such drives are used, for example, in database servers and usually run at 10,000 or even 15,000 rpm (and unfortunately they are not exactly cheap). These drives achieve access times down to 4-5 ms. The Western Digital "Raptor" is an example of a modern 10,000 rpm drive with such access times. When it comes to high-end applications like huge libraries and high polyphony, you'll probably want to use a drive like these, or even two of them in a RAID configuration.
Another word on the hard disk interface: nowadays there's hardly any difference (for us) between connection standards like ATA, SCSI, FireWire or SATA. They can all transmit data faster than any hard disk could deliver it. Still, there may be differences in the way the hard disk is connected: the best possible performance is generally achieved with the disk attached to the main internal connector. That does not mean an external FireWire disk will work less well, but there are cases where that is imaginable: on the PC side, FireWire (or RAID) controllers are often attached to the PCI bus, which also holds the sound card. Cheap adapters and/or bad drivers can, in some situations, have a bad influence on DFD performance and produce sonic artefacts.
Let's go on to memory. We've seen that we do need some memory, and our demand can grow quite a bit. If we want to use instruments with many samples, we'll end up with just as many preload buffers, and they can add up considerably.
The next thing is buffer sizes: we've seen in the theory section above that larger buffers can take some pressure off the hard disk, hence improve performance and polyphony. In fact, large buffers can to some extent compensate for slow access times. "To some extent" means that it's not a plain "more buffer, more voices" relationship. There are even situations where raising buffer sizes helps up to a certain point and decreases performance afterwards. The "best for everyone" setting does not exist, so a little experimentation is still required.
So, how much memory do we need after all? Obviously, the answer would be "the more, the better". Especially when it comes to large libraries, RAM can become a crucial resource. Again, it largely depends on what instruments you want to use. The DFD control panel will show you how much (permanent) RAM your voice buffer configuration takes, and you'll see how much the individual instruments take. Adding these up will give you an impression. Of course there is a lot of additional demand for RAM (the operating system itself, the sequencer, other software instruments, ...). As a hint for memory equipment, we could state the following:
· 512 Megabytes can be considered minimum for any decent audio system
· 1 Gigabyte is a good value for a powerful all round system and should serve well for most standard DFD applications
· 2 Gigabytes would make up a high-end system usable with large and high-end libraries.
What's left is the question of questions: "what DFD buffer sizes should I choose?" As said before, there is no general-purpose answer to this. The DFD dialog provides some predefined settings which should work well in most cases. The "normal" setting, for example, reserves 64 voice buffers at 384 kB (which makes a base memory demand of 24 MB) and sets preload buffers to 192 kB. This setting should perform very well in most cases (although you might want to raise the number of reserved voices). There are other presets which might suit your needs better, but you can of course always configure things by hand. Which would be a good idea anyway, since it largely depends on your system's RAM and hard disk. Raising the buffer sizes above the values mentioned can be necessary if you're using large high-end libraries and don't achieve high enough polyphony.
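As a quick check of the "normal" preset's arithmetic (my own calculation; the preset values are those quoted above):

```python
# Base memory demand of the "normal" preset: 64 reserved voices at 384 kB each.
voice_buffer_total_kb = 64 * 384
print(voice_buffer_total_kb / 1024)  # -> 24.0 MB, matching the 24 MB quoted above
```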
Let me re-ask my questions, I'm counting on you. 1- To start with:
- How do you structure your patches?
(for a violin, for example)
- Do you prepare presets with 1, 2... octaves?
- Mono or stereo samples?
- For articulations:
- do you split the keyboard? Or do you make presets
on a different MIDI channel that you assign to the desired note
with the corresponding articulation?
- Which sampler do you use?
Feel free to add if you have other questions
or suggestions.
We need the help of all the specialists here to be able to put together a good summary.
On another note, my last post was a bit "short" and I'd like to explain myself better.
- Here's a rip from an old vinyl record, made entirely with synthesis on the modular Moog. By today's standards it doesn't exactly sparkle, but what MUSICALITY!!! ... "But how does he do it???"
EXCERPT No. 1: Tomita / Daphnis et Chloé (Ravel)
- Another rip, from the same era. I'll let you guess how the strings were done... The sound is dated, there aren't many articulation variants and the placement of the sections is unusual, and yet...
EXCERPT No. 2: Klaus Schulze, "X"
My idea is that synthesis (Tomita) brings the same magic as a real orchestra, even though the imitation is far from perfect, and also that an orchestra (Schulze) sometimes uses only limited means (articulations, virtuosity, ...) to achieve that musical magic.
But this takes nothing away from the debate JD started; it's just food for thought...