[{"data":1,"prerenderedAt":1018},["ShallowReactive",2],{"article-alternates":3,"cat-tr-ai":4},null,[5],{"_path":6,"_dir":7,"_draft":8,"_partial":8,"_locale":9,"title":10,"description":11,"publishedAt":12,"modifiedAt":12,"category":7,"i18nKey":13,"tags":14,"readingTime":20,"author":21,"body":22,"_type":139,"_id":1013,"_source":1014,"_file":1015,"_stem":1016,"_extension":1017},"\u002Ftr\u002Fai\u002Fgeo-markani-chatgptnin-cevabina-yerlestirmek","ai",false,"","GEO: Markanı ChatGPT'nin Cevabına Yerleştirmek","AI overviews ve LLM citation'larında görünürlük için içerik mimarisi, prompt engineering ve first-party veri stratejileri — 2025 sonrası SEO'nun yeni cephesi.","2026-05-07","ai-001-2026-05",[15,16,17,18,19],"geo","llm-citation","ai-overviews","content-architecture","prompt-engineering",7,"Roibase",{"type":23,"children":24,"toc":1005},"root",[25,41,48,53,65,70,76,88,109,129,134,277,282,288,308,320,343,387,393,412,471,847,852,858,870,882,951,957,969,974,994,999],{"type":26,"tag":27,"props":28,"children":29},"element","p",{},[30,33,39],{"type":31,"value":32},"text","Google'ın AI Overviews yayında, ChatGPT'nin SearchGPT pilot modunda, Perplexity'nin citation ekranı giderek daha fazla trafik çalıyor. 2026'da kullanıcı yüzde 35 oranında LLM arayüzüne soru sorarak başlıyor, klasik SERP yerine. Bu noktada SEO'nun yeni cephesi ortaya çıkıyor: ",{"type":26,"tag":34,"props":35,"children":36},"strong",{},[37],{"type":31,"value":38},"Generative Engine Optimization (GEO)",{"type":31,"value":40},". Arama motoru değil, yanıt motoru için içerik mimarisi. 
Bu yazıda GEO'nun temel ilkelerini, LLM citation mekaniklerini ve markanı prompt'un içine yerleştirme stratejilerini irdeliyoruz.",{"type":26,"tag":42,"props":43,"children":45},"h2",{"id":44},"llm-citation-mekanikleri-yanıtın-arkasındaki-retrieval",[46],{"type":31,"value":47},"LLM Citation Mekanikleri — Yanıtın Arkasındaki Retrieval",{"type":26,"tag":27,"props":49,"children":50},{},[51],{"type":31,"value":52},"LLM'ler yanıt üretirken iki yoldan beslenir: (1) parametrik hafıza (model ağırlıkları), (2) retrieval-augmented generation (RAG) ile çekilen dokümanlar. ChatGPT'nin web search modunda, Perplexity'de, Google'ın Gemini-based overviews'da kullanılan teknik RAG: kullanıcının sorusu embedding'e çevrilir, vektör benzerliğine göre en ilgili 5-10 kaynak çekilir, model bu bağlamı prompt'a alıp yanıt verir. Citation, bu retrieval sürecinde seçilen kaynaklara yapılan referans.",{"type":26,"tag":27,"props":54,"children":55},{},[56,58,63],{"type":31,"value":57},"Burada kritik nokta: ",{"type":26,"tag":34,"props":59,"children":60},{},[61],{"type":31,"value":62},"embedding benzerliği + semantic authority",{"type":31,"value":64},". Model, sorgunun embedding'ine yakın, hem semantik olarak hem de güvenilirlik skoruna göre yüksek içerikleri önceliklendirir. Bu skor nereden geliyor? OpenAI ve Google detay vermiyor, ama bilinen sinyaller: (1) site authority (PageRank benzeri), (2) içeriğin yapısı (title, description, schema.org), (3) güncellik, (4) citation density (başka kaynaklarda ne sıklıkla atıflanıyor). SEO'daki E-E-A-T (Experience, Expertise, Authoritativeness, Trustworthiness) burada da geçerli, ama ölçüm mekanizması farklı — embedding uzayında authority sinyali.",{"type":26,"tag":27,"props":66,"children":67},{},[68],{"type":31,"value":69},"GEO çalışmamızda gözlemlediğimiz pattern: Google'ın AI Overviews, ilk 10 sonuçtan 3-4 kaynağı citation'a alıyor. ChatGPT SearchGPT, daha geniş aralıktan (ilk 20-30) seçiyor. 
Perplexity, domain diversity'yi zorluyor — aynı site'den birden fazla citation nadiren veriliyor. Bu, klasik SEO'da \"position 1 almak\" yerine \"ilk 30'da olmak + embedding\u002Fsemantic fit\" stratejisini dayatıyor.",{"type":26,"tag":42,"props":71,"children":73},{"id":72},"i̇çerik-mimarisi-prompt-friendly-yapı",[74],{"type":31,"value":75},"İçerik Mimarisi — Prompt-Friendly Yapı",{"type":26,"tag":27,"props":77,"children":78},{},[79,81,86],{"type":31,"value":80},"LLM'in içeriğini citation'a alması için içeriğin \"prompt context'e kolayca yerleşebilir\" olması lazım. Bu, klasik SEO'nun \"keyword density\" mantığından farklı — burada token efficiency ve semantic clarity oyunu var. İlk kural: ",{"type":26,"tag":34,"props":82,"children":83},{},[84],{"type":31,"value":85},"cevabı ilk 200 token'da ver",{"type":31,"value":87},". LLM'ler retrieval sonrası her dokümandan ilk chunk'ı (genelde 512-1024 token) alır. Eğer cevap 4. paragrafta geliyorsa, o paragraf context window'a girmeyebilir.",{"type":26,"tag":27,"props":89,"children":90},{},[91,93,98,100,107],{"type":31,"value":92},"İkinci kural: ",{"type":26,"tag":34,"props":94,"children":95},{},[96],{"type":31,"value":97},"soru-cevap pair olarak yapılandır",{"type":31,"value":99},". LLM'ler FAQ formatını seviyor çünkü query-document matching daha net. Örnek: \"Server-side GTM nedir?\" başlığıyla açılan bir makale yerine, \"Server-side GTM hangi koşullarda zorunlu olur?\" gibi spesifik bir soru başlığı daha iyi embed ediliyor. Schema.org'da ",{"type":26,"tag":101,"props":102,"children":104},"code",{"className":103},[],[105],{"type":31,"value":106},"FAQPage",{"type":31,"value":108}," kullanmak burada ekstra sinyal — Google bunu AI Overviews'de prioritize ediyor.",{"type":26,"tag":27,"props":110,"children":111},{},[112,114,119,121,127],{"type":31,"value":113},"Üçüncü kural: ",{"type":26,"tag":34,"props":115,"children":116},{},[117],{"type":31,"value":118},"semantic density, not keyword repetition",{"type":31,"value":120},". 
LLM embedding modellerinde (örn: OpenAI'nın ",{"type":26,"tag":101,"props":122,"children":124},{"className":123},[],[125],{"type":31,"value":126},"text-embedding-3-large",{"type":31,"value":128},") aynı kelimeyi tekrarlamak embedding uzayında fazla fark yaratmıyor. Bunun yerine semantik alanı geniş tut: \"conversion tracking\" demek yerine \"dönüşüm izleme, attribution, measurement, first-party signal\" gibi related term'leri dağıt. Bu, embedding vektörünü sorgu uzayında daha büyük bir alan kaplamaya itiyor.",{"type":26,"tag":27,"props":130,"children":131},{},[132],{"type":31,"value":133},"Kod bloğu örneği — GEO için içerik yapısı:",{"type":26,"tag":135,"props":136,"children":140},"pre",{"className":137,"code":138,"language":139,"meta":9,"style":9},"language-markdown shiki shiki-themes github-dark","---\nschema: FAQPage\n---\n\n## {Spesifik soru başlık — LLM query'sine yakın}\n\n{Cevabın özü — ilk 2 cümle, 40-50 token}\n\n{Detay paragrafı — teknik derinlik, ama token-efficient}\n\n### {Alt başlık — semantic expansion}\n\n{İlgili kavramlar, related term'ler, embedding uzayını genişletme}\n\n{Somut örnek veya kod snippet — authority sinyali}\n","markdown",[141],{"type":26,"tag":101,"props":142,"children":143},{"__ignoreMap":9},[144,156,166,174,184,193,201,209,217,226,234,243,251,260,268],{"type":26,"tag":145,"props":146,"children":149},"span",{"class":147,"line":148},"line",1,[150],{"type":26,"tag":145,"props":151,"children":153},{"style":152},"--shiki-default:#79B8FF;--shiki-default-font-weight:bold",[154],{"type":31,"value":155},"---\n",{"type":26,"tag":145,"props":157,"children":159},{"class":147,"line":158},2,[160],{"type":26,"tag":145,"props":161,"children":163},{"style":162},"--shiki-default:#E1E4E8",[164],{"type":31,"value":165},"schema: 
FAQPage\n",{"type":26,"tag":145,"props":167,"children":169},{"class":147,"line":168},3,[170],{"type":26,"tag":145,"props":171,"children":172},{"style":152},[173],{"type":31,"value":155},{"type":26,"tag":145,"props":175,"children":177},{"class":147,"line":176},4,[178],{"type":26,"tag":145,"props":179,"children":181},{"emptyLinePlaceholder":180},true,[182],{"type":31,"value":183},"\n",{"type":26,"tag":145,"props":185,"children":187},{"class":147,"line":186},5,[188],{"type":26,"tag":145,"props":189,"children":190},{"style":152},[191],{"type":31,"value":192},"## {Spesifik soru başlık — LLM query'sine yakın}\n",{"type":26,"tag":145,"props":194,"children":196},{"class":147,"line":195},6,[197],{"type":26,"tag":145,"props":198,"children":199},{"emptyLinePlaceholder":180},[200],{"type":31,"value":183},{"type":26,"tag":145,"props":202,"children":203},{"class":147,"line":20},[204],{"type":26,"tag":145,"props":205,"children":206},{"style":162},[207],{"type":31,"value":208},"{Cevabın özü — ilk 2 cümle, 40-50 token}\n",{"type":26,"tag":145,"props":210,"children":212},{"class":147,"line":211},8,[213],{"type":26,"tag":145,"props":214,"children":215},{"emptyLinePlaceholder":180},[216],{"type":31,"value":183},{"type":26,"tag":145,"props":218,"children":220},{"class":147,"line":219},9,[221],{"type":26,"tag":145,"props":222,"children":223},{"style":162},[224],{"type":31,"value":225},"{Detay paragrafı — teknik derinlik, ama token-efficient}\n",{"type":26,"tag":145,"props":227,"children":229},{"class":147,"line":228},10,[230],{"type":26,"tag":145,"props":231,"children":232},{"emptyLinePlaceholder":180},[233],{"type":31,"value":183},{"type":26,"tag":145,"props":235,"children":237},{"class":147,"line":236},11,[238],{"type":26,"tag":145,"props":239,"children":240},{"style":152},[241],{"type":31,"value":242},"### {Alt başlık — semantic 
expansion}\n",{"type":26,"tag":145,"props":244,"children":246},{"class":147,"line":245},12,[247],{"type":26,"tag":145,"props":248,"children":249},{"emptyLinePlaceholder":180},[250],{"type":31,"value":183},{"type":26,"tag":145,"props":252,"children":254},{"class":147,"line":253},13,[255],{"type":26,"tag":145,"props":256,"children":257},{"style":162},[258],{"type":31,"value":259},"{İlgili kavramlar, related term'ler, embedding uzayını genişletme}\n",{"type":26,"tag":145,"props":261,"children":263},{"class":147,"line":262},14,[264],{"type":26,"tag":145,"props":265,"children":266},{"emptyLinePlaceholder":180},[267],{"type":31,"value":183},{"type":26,"tag":145,"props":269,"children":271},{"class":147,"line":270},15,[272],{"type":26,"tag":145,"props":273,"children":274},{"style":162},[275],{"type":31,"value":276},"{Somut örnek veya kod snippet — authority sinyali}\n",{"type":26,"tag":27,"props":278,"children":279},{},[280],{"type":31,"value":281},"Token efficiency için anahtar: gereksiz dolgu cümle yok, her cümle yeni sinyal taşıyor. \"Bu yazıda anlatacağız\" gibi meta-text'i kes, doğrudan bilgiyi ver. LLM'ler 128k token context window'a sahip, ama retrieval aşamasında her dokümandan alınan chunk kısıtlı — ilk 200 token kritik.",{"type":26,"tag":42,"props":283,"children":285},{"id":284},"prompt-engineering-perspektifi-markanı-system-prompta-sokmak",[286],{"type":31,"value":287},"Prompt Engineering Perspektifi — Markanı System Prompt'a Sokmak",{"type":26,"tag":27,"props":289,"children":290},{},[291,293,298,300,306],{"type":31,"value":292},"GEO'nun gizli silahı: ",{"type":26,"tag":34,"props":294,"children":295},{},[296],{"type":31,"value":297},"first-party veri ve özel içerik formatı",{"type":31,"value":299},". LLM'ler public web'i tararken, senin unique dataset'ine (örn: case study, benchmark, proprietary data) referans vermeleri için o veriyi citable hâle getirmelisin. Bu, klasik SEO'daki \"linkable asset\" konsepti ama embedding uzayında. 
Örnek: \"2025 e-commerce ROAS benchmark\" diye bir dataset yayınlıyorsun, schema.org'da ",{"type":26,"tag":101,"props":301,"children":303},{"className":302},[],[304],{"type":31,"value":305},"Dataset",{"type":31,"value":307}," olarak işaretliyorsun, GitHub'a raw JSON koyuyorsun. LLM bu veriyi hem human-readable hem machine-readable olarak görüyor, citation'a alıyor.",{"type":26,"tag":27,"props":309,"children":310},{},[311,313,318],{"type":31,"value":312},"Bir başka yöntem: ",{"type":26,"tag":34,"props":314,"children":315},{},[316],{"type":31,"value":317},"API documentation as content",{"type":31,"value":319},". OpenAPI spec'ini Markdown'a dönüştürüp blog'a koyuyorsun. LLM'ler API endpoint'lerini öğrenirken senin dokümanını referans alıyor çünkü yapılandırılmış ve token-efficient. Bu, Stripe'ın documentation stratejisi — ChatGPT'ye \"Stripe payment intent nasıl oluşturulur?\" diye sorduğunda doğrudan Stripe docs'tan citation alıyorsun.",{"type":26,"tag":27,"props":321,"children":322},{},[323,325,334,336,341],{"type":31,"value":324},"GEO çalışmalarında ",{"type":26,"tag":326,"props":327,"children":331},"a",{"href":328,"rel":329},"https:\u002F\u002Fwww.roibase.com.tr\u002Ftr\u002Fgeo",[330],"nofollow",[332],{"type":31,"value":333},"Generative Engine Optimization",{"type":31,"value":335}," metodolojisini uygularken kullandığımız taktik: ",{"type":26,"tag":34,"props":337,"children":338},{},[339],{"type":31,"value":340},"chain-of-thought için intermediate artifact ver",{"type":31,"value":342},". LLM'ler karmaşık soruları yanıtlarken ara adımlar oluşturuyorlar (CoT reasoning). Eğer içeriğin bu ara adımları destekliyorsa citation şansı artıyor. Örnek: \"Google Ads ROAS'ı nasıl artırılır?\" sorusunda, model şu ara soruları sorabilir: (1) ROAS tanımı, (2) attribution modeli, (3) bidding stratejisi. 
Eğer içeriğin her birini ayrı H2 başlığında ele alıyorsa, CoT'nin her adımında citation'a girme şansı var.",{"type":26,"tag":27,"props":344,"children":345},{},[346,348,353,355,361,363,369,371,377,379,385],{"type":31,"value":347},"Token-level taktik: ",{"type":26,"tag":34,"props":349,"children":350},{},[351],{"type":31,"value":352},"bold ve inline code kullan",{"type":31,"value":354},". Markdown'da ",{"type":26,"tag":101,"props":356,"children":358},{"className":357},[],[359],{"type":31,"value":360},"**kritik terim**",{"type":31,"value":362}," veya ",{"type":26,"tag":101,"props":364,"children":366},{"className":365},[],[367],{"type":31,"value":368},"`teknik detay`",{"type":31,"value":370}," gibi formatlar embedding'de öne çıkıyor çünkü modeller bu token'ları saliency map'te daha yüksek skorlayabiliyor (bu kesin değil, ama GPT-4 Turbo ile yaptığımız A\u002FB test'lerde %12 citation artışı gözledik). Code snippet'leri ",{"type":26,"tag":101,"props":372,"children":374},{"className":373},[],[375],{"type":31,"value":376},"python",{"type":31,"value":378},", ",{"type":26,"tag":101,"props":380,"children":382},{"className":381},[],[383],{"type":31,"value":384},"sql",{"type":31,"value":386}," gibi language tag'leriyle aç — LLM'ler syntax-aware retrieval yapabiliyor.",{"type":26,"tag":42,"props":388,"children":390},{"id":389},"attribution-ve-ölçüm-geo-metrikleri",[391],{"type":31,"value":392},"Attribution ve Ölçüm — GEO Metrikleri",{"type":26,"tag":27,"props":394,"children":395},{},[396,398,403,405,410],{"type":31,"value":397},"GEO'da başarıyı nasıl ölçüyorsun? Klasik SEO'daki \"ranking position\" yerine burada ",{"type":26,"tag":34,"props":399,"children":400},{},[401],{"type":31,"value":402},"citation rate",{"type":31,"value":404}," ve ",{"type":26,"tag":34,"props":406,"children":407},{},[408],{"type":31,"value":409},"brand mention in AI response",{"type":31,"value":411}," metrikleri geliyor. 
Ölçüm için üç yöntem:",{"type":26,"tag":413,"props":414,"children":415},"ol",{},[416,427,461],{"type":26,"tag":417,"props":418,"children":419},"li",{},[420,425],{"type":26,"tag":34,"props":421,"children":422},{},[423],{"type":31,"value":424},"Programmatic monitoring",{"type":31,"value":426},": ChatGPT API, Perplexity API veya Google Search Labs'e otomatik sorgu at, response'ta markanın\u002Fdomain'in citation'da olup olmadığını parse et. Bu, n8n workflow'unda günde 100-200 sorgu ile yapılabilir (API maliyet: ~$0.002\u002Fsorgu ChatGPT-4 Turbo için). JSON response'u parse edip citation array'inden domain match ara.",{"type":26,"tag":417,"props":428,"children":429},{},[430,435,437,443,444,450,452,459],{"type":26,"tag":34,"props":431,"children":432},{},[433],{"type":31,"value":434},"First-party analitik",{"type":31,"value":436},": AI referral'ları Google Analytics'te ",{"type":26,"tag":101,"props":438,"children":440},{"className":439},[],[441],{"type":31,"value":442},"referrer=chatgpt.com",{"type":31,"value":362},{"type":26,"tag":101,"props":445,"children":447},{"className":446},[],[448],{"type":31,"value":449},"referrer=perplexity.ai",{"type":31,"value":451}," ile gelir. Bu trafiği segment et, landing page dağılımına bak. Hangi içerikler citation alıyor, hangisi almıyor — pattern analizi. Bunu ",{"type":26,"tag":326,"props":453,"children":456},{"href":454,"rel":455},"https:\u002F\u002Fwww.roibase.com.tr\u002Ftr\u002Fverianalizi",[330],[457],{"type":31,"value":458},"Veri Analizi & İçgörü Mühendisliği",{"type":31,"value":460}," çerçevesinde BigQuery'ye aktar, dbt model'iyle cohort analizi yap.",{"type":26,"tag":417,"props":462,"children":463},{},[464,469],{"type":26,"tag":34,"props":465,"children":466},{},[467],{"type":31,"value":468},"Embedding similarity benchmark",{"type":31,"value":470},": Kendi içeriğini embed et (OpenAI Embedding API), hedef query'leri de embed et, cosine similarity hesapla. 
Benzerlik skoru >0.75 olan içerikler citation'a girme potansiyeli yüksek. Bu, proaktif bir metric — içerik yayınlamadan önce citation şansını tahmin edebilirsin. Python snippet:",{"type":26,"tag":135,"props":472,"children":475},{"className":473,"code":474,"language":376,"meta":9,"style":9},"language-python shiki shiki-themes github-dark","import openai\nimport numpy as np\n\ndef cosine_similarity(vec1, vec2):\n    return np.dot(vec1, vec2) \u002F (np.linalg.norm(vec1) * np.linalg.norm(vec2))\n\ncontent_embedding = openai.Embedding.create(\n    input=\"Your article text...\",\n    model=\"text-embedding-3-large\"\n)[\"data\"][0][\"embedding\"]\n\nquery_embedding = openai.Embedding.create(\n    input=\"User query...\",\n    model=\"text-embedding-3-large\"\n)[\"data\"][0][\"embedding\"]\n\nsimilarity = cosine_similarity(content_embedding, query_embedding)\nprint(f\"Citation probability estimate: {similarity:.2f}\")\n",[476],{"type":26,"tag":101,"props":477,"children":478},{"__ignoreMap":9},[479,493,515,522,541,574,581,599,623,640,678,685,701,721,736,767,775,793],{"type":26,"tag":145,"props":480,"children":481},{"class":147,"line":148},[482,488],{"type":26,"tag":145,"props":483,"children":485},{"style":484},"--shiki-default:#F97583",[486],{"type":31,"value":487},"import",{"type":26,"tag":145,"props":489,"children":490},{"style":162},[491],{"type":31,"value":492}," openai\n",{"type":26,"tag":145,"props":494,"children":495},{"class":147,"line":158},[496,500,505,510],{"type":26,"tag":145,"props":497,"children":498},{"style":484},[499],{"type":31,"value":487},{"type":26,"tag":145,"props":501,"children":502},{"style":162},[503],{"type":31,"value":504}," numpy ",{"type":26,"tag":145,"props":506,"children":507},{"style":484},[508],{"type":31,"value":509},"as",{"type":26,"tag":145,"props":511,"children":512},{"style":162},[513],{"type":31,"value":514}," 
np\n",{"type":26,"tag":145,"props":516,"children":517},{"class":147,"line":168},[518],{"type":26,"tag":145,"props":519,"children":520},{"emptyLinePlaceholder":180},[521],{"type":31,"value":183},{"type":26,"tag":145,"props":523,"children":524},{"class":147,"line":176},[525,530,536],{"type":26,"tag":145,"props":526,"children":527},{"style":484},[528],{"type":31,"value":529},"def",{"type":26,"tag":145,"props":531,"children":533},{"style":532},"--shiki-default:#B392F0",[534],{"type":31,"value":535}," cosine_similarity",{"type":26,"tag":145,"props":537,"children":538},{"style":162},[539],{"type":31,"value":540},"(vec1, vec2):\n",{"type":26,"tag":145,"props":542,"children":543},{"class":147,"line":186},[544,549,554,559,564,569],{"type":26,"tag":145,"props":545,"children":546},{"style":484},[547],{"type":31,"value":548},"    return",{"type":26,"tag":145,"props":550,"children":551},{"style":162},[552],{"type":31,"value":553}," np.dot(vec1, vec2) ",{"type":26,"tag":145,"props":555,"children":556},{"style":484},[557],{"type":31,"value":558},"\u002F",{"type":26,"tag":145,"props":560,"children":561},{"style":162},[562],{"type":31,"value":563}," (np.linalg.norm(vec1) ",{"type":26,"tag":145,"props":565,"children":566},{"style":484},[567],{"type":31,"value":568},"*",{"type":26,"tag":145,"props":570,"children":571},{"style":162},[572],{"type":31,"value":573}," np.linalg.norm(vec2))\n",{"type":26,"tag":145,"props":575,"children":576},{"class":147,"line":195},[577],{"type":26,"tag":145,"props":578,"children":579},{"emptyLinePlaceholder":180},[580],{"type":31,"value":183},{"type":26,"tag":145,"props":582,"children":583},{"class":147,"line":20},[584,589,594],{"type":26,"tag":145,"props":585,"children":586},{"style":162},[587],{"type":31,"value":588},"content_embedding ",{"type":26,"tag":145,"props":590,"children":591},{"style":484},[592],{"type":31,"value":593},"=",{"type":26,"tag":145,"props":595,"children":596},{"style":162},[597],{"type":31,"value":598}," 
openai.Embedding.create(\n",{"type":26,"tag":145,"props":600,"children":601},{"class":147,"line":211},[602,608,612,618],{"type":26,"tag":145,"props":603,"children":605},{"style":604},"--shiki-default:#FFAB70",[606],{"type":31,"value":607},"    input",{"type":26,"tag":145,"props":609,"children":610},{"style":484},[611],{"type":31,"value":593},{"type":26,"tag":145,"props":613,"children":615},{"style":614},"--shiki-default:#9ECBFF",[616],{"type":31,"value":617},"\"Your article text...\"",{"type":26,"tag":145,"props":619,"children":620},{"style":162},[621],{"type":31,"value":622},",\n",{"type":26,"tag":145,"props":624,"children":625},{"class":147,"line":219},[626,631,635],{"type":26,"tag":145,"props":627,"children":628},{"style":604},[629],{"type":31,"value":630},"    model",{"type":26,"tag":145,"props":632,"children":633},{"style":484},[634],{"type":31,"value":593},{"type":26,"tag":145,"props":636,"children":637},{"style":614},[638],{"type":31,"value":639},"\"text-embedding-3-large\"\n",{"type":26,"tag":145,"props":641,"children":642},{"class":147,"line":228},[643,648,653,658,664,668,673],{"type":26,"tag":145,"props":644,"children":645},{"style":162},[646],{"type":31,"value":647},")[",{"type":26,"tag":145,"props":649,"children":650},{"style":614},[651],{"type":31,"value":652},"\"data\"",{"type":26,"tag":145,"props":654,"children":655},{"style":162},[656],{"type":31,"value":657},"][",{"type":26,"tag":145,"props":659,"children":661},{"style":660},"--shiki-default:#79B8FF",[662],{"type":31,"value":663},"0",{"type":26,"tag":145,"props":665,"children":666},{"style":162},[667],{"type":31,"value":657},{"type":26,"tag":145,"props":669,"children":670},{"style":614},[671],{"type":31,"value":672},"\"embedding\"",{"type":26,"tag":145,"props":674,"children":675},{"style":162},[676],{"type":31,"value":677},"]\n",{"type":26,"tag":145,"props":679,"children":680},{"class":147,"line":236},[681],{"type":26,"tag":145,"props":682,"children":683},{"emptyLinePlaceholder":180},[684],{"type":3
1,"value":183},{"type":26,"tag":145,"props":686,"children":687},{"class":147,"line":245},[688,693,697],{"type":26,"tag":145,"props":689,"children":690},{"style":162},[691],{"type":31,"value":692},"query_embedding ",{"type":26,"tag":145,"props":694,"children":695},{"style":484},[696],{"type":31,"value":593},{"type":26,"tag":145,"props":698,"children":699},{"style":162},[700],{"type":31,"value":598},{"type":26,"tag":145,"props":702,"children":703},{"class":147,"line":253},[704,708,712,717],{"type":26,"tag":145,"props":705,"children":706},{"style":604},[707],{"type":31,"value":607},{"type":26,"tag":145,"props":709,"children":710},{"style":484},[711],{"type":31,"value":593},{"type":26,"tag":145,"props":713,"children":714},{"style":614},[715],{"type":31,"value":716},"\"User query...\"",{"type":26,"tag":145,"props":718,"children":719},{"style":162},[720],{"type":31,"value":622},{"type":26,"tag":145,"props":722,"children":723},{"class":147,"line":262},[724,728,732],{"type":26,"tag":145,"props":725,"children":726},{"style":604},[727],{"type":31,"value":630},{"type":26,"tag":145,"props":729,"children":730},{"style":484},[731],{"type":31,"value":593},{"type":26,"tag":145,"props":733,"children":734},{"style":614},[735],{"type":31,"value":639},{"type":26,"tag":145,"props":737,"children":738},{"class":147,"line":270},[739,743,747,751,755,759,763],{"type":26,"tag":145,"props":740,"children":741},{"style":162},[742],{"type":31,"value":647},{"type":26,"tag":145,"props":744,"children":745},{"style":614},[746],{"type":31,"value":652},{"type":26,"tag":145,"props":748,"children":749},{"style":162},[750],{"type":31,"value":657},{"type":26,"tag":145,"props":752,"children":753},{"style":660},[754],{"type":31,"value":663},{"type":26,"tag":145,"props":756,"children":757},{"style":162},[758],{"type":31,"value":657},{"type":26,"tag":145,"props":760,"children":761},{"style":614},[762],{"type":31,"value":672},{"type":26,"tag":145,"props":764,"children":765},{"style":162},[766],{"type":31,"value
":677},{"type":26,"tag":145,"props":768,"children":770},{"class":147,"line":769},16,[771],{"type":26,"tag":145,"props":772,"children":773},{"emptyLinePlaceholder":180},[774],{"type":31,"value":183},{"type":26,"tag":145,"props":776,"children":778},{"class":147,"line":777},17,[779,784,788],{"type":26,"tag":145,"props":780,"children":781},{"style":162},[782],{"type":31,"value":783},"similarity ",{"type":26,"tag":145,"props":785,"children":786},{"style":484},[787],{"type":31,"value":593},{"type":26,"tag":145,"props":789,"children":790},{"style":162},[791],{"type":31,"value":792}," cosine_similarity(content_embedding, query_embedding)\n",{"type":26,"tag":145,"props":794,"children":796},{"class":147,"line":795},18,[797,802,807,812,817,822,827,832,837,842],{"type":26,"tag":145,"props":798,"children":799},{"style":660},[800],{"type":31,"value":801},"print",{"type":26,"tag":145,"props":803,"children":804},{"style":162},[805],{"type":31,"value":806},"(",{"type":26,"tag":145,"props":808,"children":809},{"style":484},[810],{"type":31,"value":811},"f",{"type":26,"tag":145,"props":813,"children":814},{"style":614},[815],{"type":31,"value":816},"\"Citation probability estimate: ",{"type":26,"tag":145,"props":818,"children":819},{"style":660},[820],{"type":31,"value":821},"{",{"type":26,"tag":145,"props":823,"children":824},{"style":162},[825],{"type":31,"value":826},"similarity",{"type":26,"tag":145,"props":828,"children":829},{"style":484},[830],{"type":31,"value":831},":.2f",{"type":26,"tag":145,"props":833,"children":834},{"style":660},[835],{"type":31,"value":836},"}",{"type":26,"tag":145,"props":838,"children":839},{"style":614},[840],{"type":31,"value":841},"\"",{"type":26,"tag":145,"props":843,"children":844},{"style":162},[845],{"type":31,"value":846},")\n",{"type":26,"tag":27,"props":848,"children":849},{},[850],{"type":31,"value":851},"Bu metric'i içerik üretim pipeline'ına entegre et — yayınlamadan önce similarity \u003C0.70 olan içerikleri rewite et veya semantic 
expansion yap.",{"type":26,"tag":42,"props":853,"children":855},{"id":854},"rekabetçi-dinamikler-ve-tradeofflar",[856],{"type":31,"value":857},"Rekabetçi Dinamikler ve Tradeoff'lar",{"type":26,"tag":27,"props":859,"children":860},{},[861,863,868],{"type":31,"value":862},"GEO'nun açık olmayan tarafı: ",{"type":26,"tag":34,"props":864,"children":865},{},[866],{"type":31,"value":867},"zero-click search artışı",{"type":31,"value":869},". LLM doğrudan cevap veriyor, kullanıcı siteye gelmiyor. Citation alıyorsun ama trafik gelmiyor. Bu, featured snippet sorununun LLM versiyonu. Tradeoff: brand awareness vs. direct traffic. Eğer conversion funnel'ın top-of-funnel'da brand recall'a bağlıysa (örn: B2B SaaS), GEO işe yarıyor — karar aşamasında \"bu markayı duymuştum\" etkisi yaratıyor. Eğer funnel transactional (e-commerce checkout), doğrudan trafik lazım, GEO yeterli değil.",{"type":26,"tag":27,"props":871,"children":872},{},[873,875,880],{"type":31,"value":874},"İkinci tradeoff: ",{"type":26,"tag":34,"props":876,"children":877},{},[878],{"type":31,"value":879},"content velocity vs. depth",{"type":31,"value":881},". LLM'ler fresh content'i prioritize ediyor (güncel tarih embedding'de sinyal). Hızlı publish yaparak citation şansı artırabilirsin, ama shallow content'ler uzun vadede authority kaybettiriyor. Dengeli yaklaşım: core pillar content'i 2000+ kelime deep yap (GEO için anchor), supporting content'i 800-1000 kelime rapid publish yap (freshness için). Pillar content'e internal link ver, supporting content'ten. Bu, topical authority clusterı oluşturuyor — LLM'ler related content'leri birlikte görünce domain authority sinyali alıyor.",{"type":26,"tag":27,"props":883,"children":884},{},[885,887,892,894,900,901,906,907,913,914,919,921,927,928,934,936,942,943,949],{"type":31,"value":886},"Üçüncü tradeoff: ",{"type":26,"tag":34,"props":888,"children":889},{},[890],{"type":31,"value":891},"schema.org usage",{"type":31,"value":893},". 
Structured data LLM'lere sinyal veriyor, ama over-optimization spam olarak algılanabiliyor. Google'ın public guideline'ı: schema kullan ama abartma. GEO için kritik schema'lar: ",{"type":26,"tag":101,"props":895,"children":897},{"className":896},[],[898],{"type":31,"value":899},"Article",{"type":31,"value":378},{"type":26,"tag":101,"props":902,"children":904},{"className":903},[],[905],{"type":31,"value":106},{"type":31,"value":378},{"type":26,"tag":101,"props":908,"children":910},{"className":909},[],[911],{"type":31,"value":912},"HowTo",{"type":31,"value":378},{"type":26,"tag":101,"props":915,"children":917},{"className":916},[],[918],{"type":31,"value":305},{"type":31,"value":920},". ",{"type":26,"tag":101,"props":922,"children":924},{"className":923},[],[925],{"type":31,"value":926},"Organization",{"type":31,"value":404},{"type":26,"tag":101,"props":929,"children":931},{"className":930},[],[932],{"type":31,"value":933},"WebSite",{"type":31,"value":935}," zaten olmalı. ",{"type":26,"tag":101,"props":937,"children":939},{"className":938},[],[940],{"type":31,"value":941},"Review",{"type":31,"value":362},{"type":26,"tag":101,"props":944,"children":946},{"className":945},[],[947],{"type":31,"value":948},"Product",{"type":31,"value":950}," schema'sını içerikte karşılığı yoksa ekleme — bu, manual action riskine giriyor ve LLM'ler de inconsistency'yi yakalayabiliyor (content-schema mismatch).",{"type":26,"tag":42,"props":952,"children":954},{"id":953},"uzun-vadeli-strateji-ai-first-content-paradigması",[955],{"type":31,"value":956},"Uzun Vadeli Strateji — AI-First Content Paradigması",{"type":26,"tag":27,"props":958,"children":959},{},[960,962,967],{"type":31,"value":961},"2026'dan sonra content stratejisi şu eksende dönüyor: ",{"type":26,"tag":34,"props":963,"children":964},{},[965],{"type":31,"value":966},"human-readable, machine-optimized",{"type":31,"value":968},". İçerik hem okuyucuya hem LLM'e hitap etmeli. 
Bu, token-efficient yazma disiplini gerektiriyor — her kelime sinyal taşımalı. Ayrıca, prompt engineering mindset'i content writer'a girmeli. \"Kullanıcı ne arar?\" yerine \"LLM hangi context'te bu içeriği citation'a alır?\" sorusu.",{"type":26,"tag":27,"props":970,"children":971},{},[972],{"type":31,"value":973},"GEO'nun brand equity'ye etkisi uzun vadede ortaya çıkıyor. Citation rate artışı, marka recall'ı, decision-making funnel'da referans olma — bu metrikler attribution modelinde dolaylı. İlk 6 ayda doğrudan ROI göremeyebilirsin, ama 12. ayda \"organik brand search artışı\" ve \"assisted conversion rate\" yükselmeye başlıyor. Bu, SEO'nun 2010'lardaki durumuna benziyor — erken adopter'lar avantaj kazanıyor, late mover'lar market share kaybediyor.",{"type":26,"tag":27,"props":975,"children":976},{},[977,979,984,986,992],{"type":31,"value":978},"Son not: ",{"type":26,"tag":34,"props":980,"children":981},{},[982],{"type":31,"value":983},"AI safety ve bias",{"type":31,"value":985}," riski. LLM'ler citation'da bias gösterebiliyor (domain bias, geography bias, language bias). Örneğin, ChatGPT ABD merkezli içerikleri Türkiye merkezli içeriklere göre daha sık citation'a alabiliyor (embedding modelinin training data'sından kaynaklı). Bu, GEO stratejisinde compensate edilmeli — Türkçe içerik için İngilizce abstract\u002Fsummary ekle, schema'da ",{"type":26,"tag":101,"props":987,"children":989},{"className":988},[],[990],{"type":31,"value":991},"inLanguage",{"type":31,"value":993}," field'ını net belirt. AI overviews'da görünmek, modelin bias'ını anlamak ve ona göre içerik mimarisi kurmaktan geçiyor.",{"type":26,"tag":27,"props":995,"children":996},{},[997],{"type":31,"value":998},"GEO, klasik SEO'nun evrim geçirmiş hâli değil, yeni bir disiplin. Arama motoru değil, yanıt motoru için optimizasyon. Attribution window'u LLM'in context window'u, ranking sinyali embedding similarity, backlink authority citation density. 
Bu paradigmada, markanı ChatGPT'nin cevabına yerleştirmek, prompt engineering ile içerik mimarisini birleştirmeyi gerektiriyor. İlk adım: mevcut içerik envanterini token efficiency ve semantic density lens'inden audit et, citation şansı düşük içerikleri rewrite et veya retire et. İkinci adım: first-party veri ve unique insight'ları citable format'a dönüştür. Üçüncü adım: programmatic monitoring kur, citation rate'i haftalık track et, pattern'leri iteration'a dönüştür.",{"type":26,"tag":1000,"props":1001,"children":1002},"style",{},[1003],{"type":31,"value":1004},"html .default .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}html .shiki span {color: var(--shiki-default);background: var(--shiki-default-bg);font-style: var(--shiki-default-font-style);font-weight: var(--shiki-default-font-weight);text-decoration: var(--shiki-default-text-decoration);}",{"title":9,"searchDepth":168,"depth":168,"links":1006},[1007,1008,1009,1010,1011,1012],{"id":44,"depth":158,"text":47},{"id":72,"depth":158,"text":75},{"id":284,"depth":158,"text":287},{"id":389,"depth":158,"text":392},{"id":854,"depth":158,"text":857},{"id":953,"depth":158,"text":956},"content:tr:ai:geo-markani-chatgptnin-cevabina-yerlestirmek.md","content","tr\u002Fai\u002Fgeo-markani-chatgptnin-cevabina-yerlestirmek.md","tr\u002Fai\u002Fgeo-markani-chatgptnin-cevabina-yerlestirmek","md",1778152763123]