I have a feeling come Jan 1 there will be many more! Intel MKL for machine learning will be out. Wowzaaaaaa!
use cases?
> Any memory segmentation? How does the hierarchy with the MCDRAM work?

There is a Cache Mode (think of it as a giant 16GB L3 cache) and a Flat Mode, where the 16GB of MCDRAM is added to the addressable system memory, which you can see in the above screenshot. You can also run a Hybrid Mode that works as a combination of the two.
> Silvermont cores, right? They could in theory be used for virtualization? Maybe Docker containers?

Docker does work. I fired up the Monero mining container on the KNL. Easy.
> Why would you want to play games on such a massively parallel system?

[url=http://vizdoom.cs.put.edu.pl/competition-cig-2016]AI/machine learning[/url]
> How will it deal with 'standard' RAM when a program needs more memory than the 7210 has? Might that situation be a bottleneck for the system?

The various MCDRAM modes give you a few options here. You can use standard RAM for capacity and put the MCDRAM in front of it as a cache, just as one example.
Any ideas on where/how to test performance on a specific problem?