Google Drive automation with n8n: simplified image analysis
This n8n workflow automates the analysis of images stored on Google Drive, saving users time and making visual content management more efficient. By combining image-editing and data-analysis tools, the process suits marketing teams, content creators, and communication professionals who want to extract useful information from their visuals.

The workflow starts with a manual trigger, so the user can launch the process at any time. It then uses the Google Drive node to download the target image by its ID. Once the image is retrieved, the workflow extracts color information and resizes the image so it is optimized for downstream processing; the image-editing nodes 'Get Color Information' and 'Resize Image' handle this preparation. Next, the 'Get Image Keywords' and 'Embeddings OpenAI' nodes generate keywords and semantic embeddings, enriching the resulting document. Together, these steps automate work that would otherwise be time-consuming while reducing the risk of human error.
n8n workflow with Google Drive, image editing, and data analysis: overview
Diagram of this workflow's nodes and connections, generated from the n8n JSON.
n8n workflow with Google Drive, image editing, and data analysis: node details
{
"meta": {
"instanceId": "26ba763460b97c249b82942b23b6384876dfeb9327513332e743c5f6219c2b8e"
},
"nodes": [
{
"id": "141638a4-b340-473f-a800-be7dbdcff131",
"name": "When clicking \"Test workflow\"",
"type": "n8n-nodes-base.manualTrigger",
"position": [
695,
380
],
"parameters": {},
"typeVersion": 1
},
{
"id": "6ccdaca5-f620-4afa-bed6-92f3a450687d",
"name": "Google Drive",
"type": "n8n-nodes-base.googleDrive",
"position": [
875,
380
],
"parameters": {
"fileId": {
"__rl": true,
"mode": "list",
"value": "0B43u2YYOTJR2cC1BRkptZ3N4QTk4NEtxRko5cjhKUUFyemw0",
"cachedResultUrl": "https://drive.google.com/file/d/0B43u2YYOTJR2cC1BRkptZ3N4QTk4NEtxRko5cjhKUUFyemw0/view?usp=drivesdk&resourcekey=0-UJ8EfTMMBRNVyBb6KhN2Tg",
"cachedResultName": "0B0A0255.jpeg"
},
"options": {},
"operation": "download"
},
"credentials": {
"googleDriveOAuth2Api": {
"id": "yOwz41gMQclOadgu",
"name": "Google Drive account"
}
},
"typeVersion": 3
},
{
"id": "b0c2f7a4-a336-4705-aeda-411f2518aaef",
"name": "Get Color Information",
"type": "n8n-nodes-base.editImage",
"position": [
1200,
200
],
"parameters": {
"operation": "information"
},
"typeVersion": 1
},
{
"id": "3e42b3f1-6900-4622-8c0d-2d9a27a7e1c9",
"name": "Resize Image",
"type": "n8n-nodes-base.editImage",
"position": [
1200,
580
],
"parameters": {
"width": 512,
"height": 512,
"options": {},
"operation": "resize",
"resizeOption": "onlyIfLarger"
},
"typeVersion": 1
},
{
"id": "00425bb2-289e-4a09-8fcb-52319281483c",
"name": "Default Data Loader",
"type": "@n8n/n8n-nodes-langchain.documentDefaultDataLoader",
"position": [
2300,
380
],
"parameters": {
"options": {
"metadata": {
"metadataValues": [
{
"name": "source",
"value": "={{ $('Document for Embedding').item.json.metadata.source }}"
},
{
"name": "format",
"value": "={{ $('Document for Embedding').item.json.metadata.format }}"
},
{
"name": "backgroundColor",
"value": "={{ $('Document for Embedding').item.json.metadata.backgroundColor }}"
}
]
}
}
},
"typeVersion": 1
},
{
"id": "06dbdf39-9d72-460e-a29c-1ae4e9f3552a",
"name": "Recursive Character Text Splitter",
"type": "@n8n/n8n-nodes-langchain.textSplitterRecursiveCharacterTextSplitter",
"position": [
2300,
500
],
"parameters": {
"options": {}
},
"typeVersion": 1
},
{
"id": "139cac42-c006-4c9d-8298-ade845e137a7",
"name": "Sticky Note",
"type": "n8n-nodes-base.stickyNote",
"position": [
1140,
100
],
"parameters": {
"color": 7,
"width": 372,
"height": 288,
"content": "### Get Color Channels\n[Source: https://www.pinecone.io/learn/series/image-search/color-histograms/](https://www.pinecone.io/learn/series/image-search/color-histograms/)"
},
"typeVersion": 1
},
{
"id": "9b8584ae-067c-4515-b194-32986ba3bf8b",
"name": "Sticky Note1",
"type": "n8n-nodes-base.stickyNote",
"position": [
1140,
418
],
"parameters": {
"color": 7,
"width": 376.4067897296865,
"height": 335.30166772984643,
"content": "### Generate Image Keywords\n[Source: https://www.pinecone.io/learn/series/image-search/bag-of-visual-words/](https://www.pinecone.io/learn/series/image-search/bag-of-visual-words/)\n\nNote, OpenAI Image models work best when image is resized to 512x512."
},
"typeVersion": 1
},
{
"id": "7f2c27d7-9947-42fa-aafb-78f4f95ac433",
"name": "Sticky Note2",
"type": "n8n-nodes-base.stickyNote",
"position": [
240,
540
],
"parameters": {
"color": 3,
"width": 359.1981770749933,
"height": 98.40143173756314,
"content": "⚠️ **Multimodal embedding is not designed analyze medical images for diagnostic features or disease patterns.** Please do not use Multimodal embedding for medical purposes."
},
"typeVersion": 1
},
{
"id": "cb6b4a82-db5f-41f0-94dc-6cfabe0905eb",
"name": "Combine Image Analysis",
"type": "n8n-nodes-base.merge",
"position": [
1700,
260
],
"parameters": {
"mode": "combine",
"options": {},
"combinationMode": "mergeByPosition"
},
"typeVersion": 2.1
},
{
"id": "1ba33665-3ebb-4b23-989d-eec53dfd225a",
"name": "Document for Embedding",
"type": "n8n-nodes-base.set",
"position": [
1860,
257
],
"parameters": {
"options": {},
"assignments": {
"assignments": [
{
"id": "8204b731-24e2-4993-9e6d-4cea80393580",
"name": "data",
"type": "string",
"value": "=## keywords\\n\n{{ $json.content }}\\n\n## color information:\\n\n{{ JSON.stringify($json[\"Channel Statistics\"]) }}"
},
{
"id": "ca49cccf-ea4e-4362-bf49-ac836c8758d3",
"name": "metadata",
"type": "object",
"value": "={ \"format\": \"{{ $json.format }}\", \"backgroundColor\": \"{{ $json[\"Background Color\"] }}\", \"source\": \"{{ $binary.data.fileName }}\" } "
}
]
}
},
"typeVersion": 3.3
},
{
"id": "5d01a2fd-0190-48fc-b588-d5872c5cd793",
"name": "Sticky Note3",
"type": "n8n-nodes-base.stickyNote",
"position": [
640,
250.0169327052916
],
"parameters": {
"color": 7,
"width": 418.6907913057789,
"height": 316.7698949693208,
"content": "## 1. Get the Source Image\nIn this demo, we just need an image file. We'll pull an image from google drive but you can use all input trigger or source you prefer."
},
"typeVersion": 1
},
{
"id": "4c9825f3-6a2b-4fd2-bdb1-e49f8d947e7a",
"name": "Sticky Note4",
"type": "n8n-nodes-base.stickyNote",
"position": [
1098.439755647174,
-145.1609149026466
],
"parameters": {
"color": 7,
"width": 462.52060804115854,
"height": 938.3723985625845,
"content": "## 2. Image Embedding Methods\n[Read more about working with images in n8n](https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-base.editimage)\n\nThere are a [myriad of image embedding techniques](https://www.pinecone.io/learn/series/image-search/) some which involve specialised models and some which do a simplified image-to-text representation.\nIn this demo, we'll use the simplified text representation methods: collecting color channel information and using Multimodal LLMs to produce keywords for the image. Together, these will form the document we'll embed to represent our image for search."
},
"typeVersion": 1
},
{
"id": "e4035987-16c0-4d03-9e20-5f2042a6a020",
"name": "Sticky Note5",
"type": "n8n-nodes-base.stickyNote",
"position": [
1600,
120
],
"parameters": {
"color": 7,
"width": 418.6907913057789,
"height": 343.6004071339855,
"content": "## 3. Generate Embedding Doc\nIt is important to define your metadata for later filtering and retrieval purposes.\n\n"
},
"typeVersion": 1
},
{
"id": "91fe4c5c-c063-48e2-b248-801c11880c69",
"name": "Sticky Note6",
"type": "n8n-nodes-base.stickyNote",
"position": [
2060,
-11.068945113406585
],
"parameters": {
"color": 7,
"width": 532.5269726975372,
"height": 665.9365418117011,
"content": "## 3. Store in Vector Store\n[Read more about vector stores](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreinmemory)\n\nOnce our document is ready, we can just insert into any vector store to make it ready for searching. When searching, be sure to defined the same vector store index used here!\nNote: Metadata is defined in the document loader which must be mapped manually.\n\n"
},
"typeVersion": 1
},
{
"id": "6e8ffa06-ddec-463a-b8d6-581ad7095398",
"name": "Embeddings OpenAI1",
"type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
"position": [
2680,
547
],
"parameters": {
"options": {}
},
"credentials": {
"openAiApi": {
"id": "8gccIjcuf3gvaoEr",
"name": "OpenAi account"
}
},
"typeVersion": 1
},
{
"id": "3dea73b2-6aa1-4158-945e-a5d6bea65244",
"name": "Sticky Note7",
"type": "n8n-nodes-base.stickyNote",
"position": [
2620,
200
],
"parameters": {
"color": 7,
"width": 400.96585774172854,
"height": 512.739000439197,
"content": "## 4. Try it out!\n[Read more about vector stores](https://docs.n8n.io/integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreinmemory)\n\nHere's a quick test to use a simple text prompt to search for the image. Next step would be to implement image-to-image search by using the \"Embedding Doc\" to search rather to store in the vector database.\n\n"
},
"typeVersion": 1
},
{
"id": "f6a543d4-df3b-456c-8f85-4dca29029b55",
"name": "Sticky Note8",
"type": "n8n-nodes-base.stickyNote",
"position": [
240,
140
],
"parameters": {
"width": 359.6648027457353,
"height": 384.6280362222034,
"content": "## Try It Out!\n### This workflow does the following:\n* Downloads a selected image from Google Drive.\n* Extracts colour channel information from the image.\n* Generates semantic keywords of the iamge using OpenAI vision model.\n* Combines extracted and generated data to create an embedding document for the image.\n* Inserts this document into a vector store to allow for vector search on the original image. \n\n### Need Help?\nJoin the [Discord](https://discord.com/invite/XPKeKXeB7d) or ask in the [Forum](https://community.n8n.io/)!\n\nHappy Hacking!"
},
"typeVersion": 1
},
{
"id": "1b1e8568-3779-4ee1-b520-517246d9bf86",
"name": "Get Image Keywords",
"type": "@n8n/n8n-nodes-langchain.openAi",
"position": [
1360,
580
],
"parameters": {
"text": "Extract all possible semantic keywords which describe the image. Be comprehensive and be sure to identify subjects (if applicable) such as biological and non-biological objects, lightning, mood, tone, color, special effects, camera and/or techniques used if known. Respond with a comma-separated list.",
"options": {
"detail": "high"
},
"resource": "image",
"inputType": "base64",
"operation": "analyze"
},
"credentials": {
"openAiApi": {
"id": "8gccIjcuf3gvaoEr",
"name": "OpenAi account"
}
},
"typeVersion": 1.3
},
{
"id": "724acae9-75d2-4421-b5a3-b920f7bda825",
"name": "In-Memory Vector Store",
"type": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
"position": [
2180,
200
],
"parameters": {
"mode": "insert",
"memoryKey": "image_embeddings"
},
"typeVersion": 1
},
{
"id": "52afd512-0d55-4ae3-9377-4cb324c571a8",
"name": "Embeddings OpenAI",
"type": "@n8n/n8n-nodes-langchain.embeddingsOpenAi",
"position": [
2180,
420
],
"parameters": {
"options": {}
},
"credentials": {
"openAiApi": {
"id": "8gccIjcuf3gvaoEr",
"name": "OpenAi account"
}
},
"typeVersion": 1
},
{
"id": "c769f279-22ef-4cb1-aef3-9089bb92a0a4",
"name": "Search for Image",
"type": "@n8n/n8n-nodes-langchain.vectorStoreInMemory",
"position": [
2680,
387
],
"parameters": {
"mode": "load",
"prompt": "student having fun",
"memoryKey": "image_embeddings"
},
"typeVersion": 1
}
],
"pinData": {},
"connections": {
"Google Drive": {
"main": [
[
{
"node": "Get Color Information",
"type": "main",
"index": 0
},
{
"node": "Resize Image",
"type": "main",
"index": 0
}
]
]
},
"Resize Image": {
"main": [
[
{
"node": "Get Image Keywords",
"type": "main",
"index": 0
}
]
]
},
"Embeddings OpenAI": {
"ai_embedding": [
[
{
"node": "In-Memory Vector Store",
"type": "ai_embedding",
"index": 0
}
]
]
},
"Embeddings OpenAI1": {
"ai_embedding": [
[
{
"node": "Search for Image",
"type": "ai_embedding",
"index": 0
}
]
]
},
"Get Image Keywords": {
"main": [
[
{
"node": "Combine Image Analysis",
"type": "main",
"index": 1
}
]
]
},
"Default Data Loader": {
"ai_document": [
[
{
"node": "In-Memory Vector Store",
"type": "ai_document",
"index": 0
}
]
]
},
"Get Color Information": {
"main": [
[
{
"node": "Combine Image Analysis",
"type": "main",
"index": 0
}
]
]
},
"Combine Image Analysis": {
"main": [
[
{
"node": "Document for Embedding",
"type": "main",
"index": 0
}
]
]
},
"Document for Embedding": {
"main": [
[
{
"node": "In-Memory Vector Store",
"type": "main",
"index": 0
}
]
]
},
"When clicking \"Test workflow\"": {
"main": [
[
{
"node": "Google Drive",
"type": "main",
"index": 0
}
]
]
},
"Recursive Character Text Splitter": {
"ai_textSplitter": [
[
{
"node": "Default Data Loader",
"type": "ai_textSplitter",
"index": 0
}
]
]
}
}
}
n8n workflow with Google Drive, image editing, and data analysis: who is this workflow for?
This workflow is aimed primarily at marketing teams, content creators, and communication professionals who use Google Drive to manage images. It is designed for users with an intermediate technical level who want to automate their image-analysis processes.
n8n workflow with Google Drive, image editing, and data analysis: problem solved
This workflow solves the problem of manually managing images and extracting relevant information from them. It removes the frustration of time lost on image analysis, reduces the risk of human error when processing visual data, and frees users to focus on higher-value tasks. With this automation, users get fast, accurate analyses of their visuals and a corresponding boost in productivity.
n8n workflow with Google Drive, image editing, and data analysis: workflow steps
- Step 1: The workflow is triggered manually by the user.
- Step 2: The Google Drive node downloads the image to analyze via its ID.
- Step 3: Color information is extracted from the image with the 'Get Color Information' node.
- Step 4: The image is resized to 512x512 with the 'Resize Image' node.
- Step 5: Keywords describing the image are generated with the 'Get Image Keywords' node.
- Step 6: The color data and keywords are merged and shaped into an embedding document by the 'Combine Image Analysis' and 'Document for Embedding' nodes (an illustrative sketch of this document follows the list).
- Step 7: The document is embedded via the 'Embeddings OpenAI' nodes and inserted into the in-memory vector store, ready for search.
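To make step 6 concrete, here is a sketch of what the embedding document produced by 'Document for Embedding' might look like. The keyword list and channel statistics are illustrative placeholders, not actual node output; the real 'Channel Statistics' object comes from the 'Get Color Information' node and is typically much larger.

{
  "data": "## keywords\nstudents, classroom, laughter, daylight, candid photo\n## color information:\n{\"Red\":{\"mean\":132.4},\"Green\":{\"mean\":118.9},\"Blue\":{\"mean\":101.2}}",
  "metadata": {
    "format": "JPEG",
    "backgroundColor": "#FFFFFF",
    "source": "0B0A0255.jpeg"
  }
}

The vector store embeds the "data" string, while the "metadata" object is carried alongside it for later filtering, for example restricting a search to a given source file or format.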
n8n workflow with Google Drive, image editing, and data analysis: customization guide
To customize this workflow, start by changing the file ID in the Google Drive node so it points to the image you want to analyze. You can also adjust the dimensions in the 'Resize Image' node to match your needs. If you want to integrate other tools, consider replacing or adding image-editing or data-analysis nodes. Finally, secure the flow by checking the Google Drive access permissions and monitor the workflow's performance to keep execution smooth.
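As a minimal sketch, both of the common tweaks touch only a couple of parameter blocks in the workflow JSON. 'YOUR_GOOGLE_DRIVE_FILE_ID' and the 768x768 dimensions below are placeholders, not values from the original workflow; the resource locator's 'id' mode is an alternative to 'list' mode for pasting a raw file ID directly.

Google Drive node (download a different file by ID):

"parameters": {
  "fileId": {
    "__rl": true,
    "mode": "id",
    "value": "YOUR_GOOGLE_DRIVE_FILE_ID"
  },
  "options": {},
  "operation": "download"
}

'Resize Image' node (new target dimensions):

"parameters": {
  "width": 768,
  "height": 768,
  "options": {},
  "operation": "resize",
  "resizeOption": "onlyIfLarger"
}

Keeping "resizeOption": "onlyIfLarger" ensures small images are never upscaled; if you do change the dimensions, remember the sticky note's advice that OpenAI image models work best at 512x512.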