Coral TPU, how to install?


Caennanu

Member
May 18, 2021
G'day all,

Not entirely sure where to place this question, but since it's going into my Unraid box, I figured I would post it here.

On my Unraid box I have a VM running Ubuntu 20.04 LTS.
This VM serves as my CCTV server, running Shinobi with a GPU passed through for TensorFlow object detection.

Now... I have finally gotten my hands on an M.2 Coral TPU, and I'm wanting to swap out the GPU for the TPU. (I am still waiting for my ASUS Hyper M.2 card to free up M.2 slots for the TPU, so this is preliminary.) The hardware bit isn't an issue, and I'm guessing neither will be the passing through.

So... I will need to use TensorFlow Lite for a TPU. I can find somewhat decent guides on how to install TensorFlow Lite, but I cannot find anything regarding how to enable TPU acceleration for it, and I'm hoping someone here can point me in the right direction.

The question is basically two-fold now...
1. How do I 'migrate' from TensorFlow to TensorFlow Lite (and enable that plugin in Shinobi)?
2. How do I tell TensorFlow Lite to use the TPU for acceleration?
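
As far as I can tell, the 'migration' itself is mostly a model-conversion step rather than a different TensorFlow install: the trained model gets exported to a .tflite file and then compiled for the Edge TPU. A rough sketch, assuming a full TensorFlow install and a SavedModel export at 'saved_model/' (placeholder path):

# Sketch only: convert a trained SavedModel to a quantized .tflite file.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('saved_model/')
# The Edge TPU only runs fully int8-quantized models; a representative
# dataset is also needed for full integer quantization (omitted here).
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)

# The .tflite file is then compiled for the TPU with Coral's command-line
# tool:  edgetpu_compiler model.tflite  ->  model_edgetpu.tflite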
 

kpfleming

Active Member
Dec 28, 2021
Pelham NY USA
Neither of your questions is about installation of the TPU itself :)

I suspect the best place to get answers about configuration of TensorFlow will be a forum or mailing list dedicated to TensorFlow.
 

Caennanu

Member
May 18, 2021
Neither of your questions is about installation of the TPU itself :)

I suspect the best place to get answers about configuration of TensorFlow will be a forum or mailing list dedicated to TensorFlow.
Well, that is true. It is about making the software use the TPU.

Alright, makes sense... Do you know of any, without an elitist culture, that will take the time to teach a near Linux noob how to do things? :D
 

Caennanu

Member
May 18, 2021
Not sure about that, but the second link in a Google search for 'tensorflow coral tpu' is this: Run inference on the Edge TPU with Python | Coral

That shows that it is quite simple to make an existing TF Lite application use the TPU. As to how to make Shinobi do that, I haven't got a clue :)
Yeah, I've been on their official site. The installation of the packages, drivers and such seems fairly well documented. But I know that during a regular TensorFlow install you get a prompt asking whether to enable GPU support, and I can't recall seeing a prompt where I can select 'TPU support'.

Nor can I find information on how to switch between regular TensorFlow and the Lite version...
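
Going by the Coral page linked above, there doesn't seem to be an install-time 'TPU support' option at all: the TPU is enabled at runtime by loading the Edge TPU delegate into a TF Lite interpreter. A minimal sketch, assuming the libedgetpu runtime and the tflite_runtime package are installed, with 'model_edgetpu.tflite' standing in for a model compiled by edgetpu_compiler:

# Sketch: run one inference on the Edge TPU via the TF Lite delegate.
import numpy as np
import tflite_runtime.interpreter as tflite

interpreter = tflite.Interpreter(
    model_path='model_edgetpu.tflite',
    experimental_delegates=[tflite.load_delegate('libedgetpu.so.1')])
interpreter.allocate_tensors()

# Feed a dummy input of the shape/type the model expects and invoke.
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp['index'], np.zeros(inp['shape'], dtype=inp['dtype']))
interpreter.invoke()
out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out['index']))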
 

Sean Ho

seanho.com
Nov 19, 2019
Vancouver, BC
Regarding actual installation of the TPU, since you mentioned using an ASUS Hyper to free up m.2 slots for the TPU, note that the single-TPU card is 2230 A+E-key, and the dual-TPU card is 2230 E-key; each TPU uses a single PCIe x1 link. You may be able to look on AliExpress or similar for an adapter from M-key to E-key that passes through an x1 link. Or use the m.2 slot for Wi-Fi if you have one.
 

Caennanu

Member
May 18, 2021
Regarding actual installation of the TPU, since you mentioned using an ASUS Hyper to free up m.2 slots for the TPU, note that the single-TPU card is 2230 A+E-key, and the dual-TPU card is 2230 E-key; each TPU uses a single PCIe x1 link. You may be able to look on AliExpress or similar for an adapter from M-key to E-key that passes through an x1 link. Or use the m.2 slot for Wi-Fi if you have one.
I am aware of the form factor.
I need to free up the M.2 slots on my mainboard, as they are currently populated with two M.2 drives.
The drives will go on the ASUS Hyper, and I will have the ability to set the onboard M.2 slot to 2x2 or 2x1 or whatever I need.
 

Caennanu

Member
May 18, 2021
Why not just get the TPU in USB flavor? USB Accelerator | Coral
Well, there are many factors, none of which have to do with the actual question. But since you're asking, it would be rude not to answer.

1. My case is a 4U in a rack; I would hate to hang a USB device on the outside, it just looks nasty (even though it's a homelab).
2. Using a USB version and sticking it on the inside would require me to add a USB expansion card with an internal port, defeating the purpose. I'll be using a slot anyway, and I can expand my M.2 storage this way. (Or I would have to drill a hole in the back to reach an external USB port, of which I have few.)
3. Price: an M.2 TPU cost me 50 bucks, while a USB version would cost me 200 in the current market.
4. Heat: a USB Coral accelerator gets a bit warm; the 4U case has decent airflow, whereas the outside does not, and it's not a rack located in a climate-controlled room.
5. Throughput: the USB accelerator 'requires' USB 3; it wouldn't be as efficient on the USB 2 ports I do have available.
6. Cables: they simply make a mess.
 
  • Like
Reactions: BoredSysadmin

epicurean

Active Member
Sep 29, 2014
The best documentation for use of the Coral TPU is with Frigate NVR, and in the Home Assistant community forums.
I could not get hold of the USB version of the Coral, but managed to get 2x mini PCIe versions. I put both of them in a PCIe to mini PCIe adapter and run Frigate (Docker) inside an Ubuntu VM under Proxmox, which made it easy to just pass through the 2 PCIe devices to the VM.
I do not use HA, but openHAB instead. But the integration with HA is excellent with Frigate.
 
  • Like
Reactions: abq

Caennanu

Member
May 18, 2021
The best documentation for use of the Coral TPU is with Frigate NVR, and in the Home Assistant community forums.
I could not get hold of the USB version of the Coral, but managed to get 2x mini PCIe versions. I put both of them in a PCIe to mini PCIe adapter and run Frigate (Docker) inside an Ubuntu VM under Proxmox, which made it easy to just pass through the 2 PCIe devices to the VM.
I do not use HA, but openHAB instead. But the integration with HA is excellent with Frigate.
Alright, that sounds awesome.
Yeah, I have a dual Edge TPU. I'll be passing at least one to Shinobi; the other I don't know yet.
Do you perhaps have a link / guide you followed for the Frigate install?
I'm guessing it would be fairly similar on Ubuntu 20.04 LTS, as Docker is basically Linux.
 

Caennanu

Member
May 18, 2021
Awesome, this was exactly what I was looking for! Or rather, as close as it can get, apparently :D

Yes, use Ubuntu 20.04 for Docker, NOT 22, as it does not have all the necessary drivers yet.
I'm on 20.04 and not planning on switching. No worries there!
 

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
... I need to free up the M.2 slots on my mainboard, as they are currently populated with two M.2 drives.
The drives will go on the ASUS Hyper, and I will have the ability to set the onboard M.2 slot to 2x2 or 2x1 or whatever I need.
I'm very curious how you will do that. ??
Does your motherboard/BIOS actually offer that level of control?
Or, is your mobo capable of auto-configuring its M.2 slot(s)?
[ ... and, are you sure?:)]
 
  • Like
Reactions: abq

Caennanu

Member
May 18, 2021
I'm very curious how you will do that. ??
Does your motherboard/BIOS actually offer that level of control?
Or, is your mobo capable of auto-configuring its M.2 slot(s)?
[ ... and, are you sure?:)]
The perks of using an EPYC CPU: all 7 PCIe slots support bifurcation, so all can run at x4/x4/x4/x4.
And so can my M.2 slots: x4, 2x2 or 2x1.
 
  • Like
Reactions: UhClem

UhClem

just another Bozo on the bus
Jun 26, 2012
NH, USA
The perks of using an EPYC CPU: all 7 PCIe slots support bifurcation, so all can run at x4/x4/x4/x4.
That, I expected; but ...
And so can my M.2 slots: x4, 2x2 or 2x1.
this, I'm very impressed! (Thanks for shining a light.)
[I hadn't delved very deep in the EPYC platform architecture ... NICE!]
What model is your mobo?
[I wonder how common/standard it is for (EPYC) Mobo/BIOS to offer the full gamut of PCIe configurability (up to the max of 8 devices per x16 link) for (onboard) NVMe connectors--M.2, sff-8654 (4i & 8i) etc.]
 

Caennanu

Member
May 18, 2021
That, I expected; but ...

this, I'm very impressed! (Thanks for shining a light.)
[I hadn't delved very deep in the EPYC platform architecture ... NICE!]
What model is your mobo?
[I wonder how common/standard it is for (EPYC) Mobo/BIOS to offer the full gamut of PCIe configurability (up to the max of 8 devices per x16 link) for (onboard) NVMe connectors--M.2, sff-8654 (4i & 8i) etc.]
I'm using the ASRock Rack EPYCD8-2T. The ROMED8-2T is the bigger brother, sporting 7 full x16 slots.

The NVMe (M.2) slots don't actually have the setting, but it does work...
 

Caennanu

Member
May 18, 2021
What does that mean? If there's no bifurcation setting, you can't use 2x2 or 2x1 in those slots.
What I mean is that both TPUs on the dual Edge TPU are recognized individually while on the same slot,
so I'm guessing it's auto-sensing the bifurcation need? I dunno.

And I also know why I remember having the setting: before I got this board back from RMA, I had a development BIOS sent to me for testing. That one did have the option. So that was my bad.
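
For what it's worth, a quick way to confirm that from inside the VM is to list the Edge TPUs the runtime can see; a small sketch, assuming the gasket/apex driver and the pycoral package are installed (two entries would mean both halves of the dual card are visible):

# Sketch: list detected Edge TPUs (PCIe TPUs normally show up as /dev/apex_N).
from pycoral.utils.edgetpu import list_edge_tpus

for idx, tpu in enumerate(list_edge_tpus()):
    # Each entry is a small dict, e.g. {'type': 'pci', 'path': '/dev/apex_0'}.
    print(idx, tpu)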
 

kpfleming

Active Member
Dec 28, 2021
Pelham NY USA
Since the TPU chips are tiny, looking at the picture of the dual-TPU M.2 card I suspect the larger chips are some sort of PCIe bridge or switch chips. That would allow the card to split the lanes itself, without help from the host system. I can't imagine it would be possible to 'auto-sense' the need for bifurcation and then split the lanes on the host side.
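
One way to check that from Linux would be to look at where the TPU endpoints sit in the PCIe topology. The sketch below assumes the Coral Edge TPU uses PCI vendor/device IDs 0x1ac1/0x089a (worth double-checking against lspci -nn): if the resolved sysfs path shows extra devices between a TPU and the CPU root port, a bridge or switch on the card (or adapter) is doing the splitting; if each TPU hangs directly off its own root port, the host is bifurcating the slot.

# Sketch: print the upstream PCIe path of each Coral Edge TPU endpoint.
# The vendor/device IDs below are an assumption; verify with 'lspci -nn'.
import glob
import os

for dev in sorted(glob.glob('/sys/bus/pci/devices/*')):
    try:
        with open(os.path.join(dev, 'vendor')) as f:
            vendor = f.read().strip()
        with open(os.path.join(dev, 'device')) as f:
            device = f.read().strip()
    except OSError:
        continue
    if vendor == '0x1ac1' and device == '0x089a':
        # realpath() expands the sysfs symlink into the full device chain,
        # so any intermediate bridge/switch ports appear in the path.
        print(os.path.basename(dev), '->', os.path.realpath(dev))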