unable to get data into influxDB #1

Open
anubisg1 opened this issue Feb 28, 2019 · 2 comments
Labels: documentation, question (Further information is requested)

Comments

anubisg1 commented Feb 28, 2019

Hello,

I am collecting data from a Nexus 9000. I am able to send data to Kafka without issue, and the inspector works as well (I can see the data in the dump file), but I cannot get any data into InfluxDB.

This is what I get in inspectordump.txt:

------- 2019-02-28 21:15:12.755654432 +0000 UTC m=+258.412984901 -------
Summary: GPB(common) Message [172.31.1.10:23105()//Cisco-NX-OS-device:System/procsys-items/sysload-items msg len: 360]
{
    "Source": "172.31.1.10:23105",
    "Telemetry": {
        "node_id_str": "leaf101-N93180YC-EX",
        "subscription_id_str": "1",
        "encoding_path": "/Cisco-NX-OS-device:System/procsys-items/sysload-items",
        "collection_id": 957,
        "collection_start_time": 0,
        "msg_timestamp": 1551388902456,
        "data_gpbkv": [],
        "data_gpb": null,
        "collection_end_time": 0
    },
    "Rows": [
        {
            "Timestamp": 0,
            "Keys": {
                "/Cisco-NX-OS-device:System/procsys-items/sysload-items": "/Cisco-NX-OS-device:System/procsys-items/sysload-items"
            },
            "Content": {
                "": {
                    "sysload-items": {
                        "": {
                            "loadAverage15m": "0.450000",
                            "loadAverage1m": "1.150000",
                            "loadAverage5m": "0.680000",
                            "name": "sysload",
                            "runProc": 1,
                            "totalProc": 360
                        }
                    }
                }
            }
        }
    ]
}

And this is what I have in my metrics.json:

[
        {
                "basepath" : "Cisco-NX-OS-device:System/procsys-items/sysload-items",
                "spec" : {
                        "fields" : [
                                {"name":"loadAverage15m"},
                                {"name":"loadAverage1m"},
                                {"name":"loadAverage5m"},
                                {"name":"name"},
                                {"name":"runProc"},
                                {"name":"totalProc"}
                        ]
                }
        }
]

In this file I tried both of the following basepaths, but neither makes any difference:

                "basepath" : "Cisco-NX-OS-device:System/procsys-items/sysload-items",

                "basepath" : "/Cisco-NX-OS-device:System/procsys-items/sysload-items",

Here is the conf file section for InfluxDB:

[metrics_influx]
stage=xport_output
type= metrics
file=/etc/pipeline/metrics.json
datachanneldepth=10000
output=influx
influx=http://influxdb:8086
database=telemetry
workers=10
dump=/etc/pipeline/metricsdump.txt
username=client
password=FiCElcS3e0D4HL+bSejk5eFymwrxB2IJVmK7AFgCJVkn9bdJ1RDfRL3diGCEqqjvAY7jn1ux1V9JtpI+PpJRza7KjTUz/8jjapymVIxpoC8alwpxpIIeau41vCiTRCWPC6cwKBvvFTYBYa2TUR3b3TOMyibOEJg9edbAcIRSraFiwzrAhtTq0O2LHMFEnNGiLuzJ/DNPo281xA0oVMQYuyy7wC9AFwCXmZvpk0pwJI9PT2UJ5TVdf0uom4tEQ/ay8YrPXmgCjvjWVp6+eG2eLJBTXHx+hL4+tcLVRz/3stogcQVyxJSrpjn5oLQEZgzJLvWHKjGbjFBChsCVkxPNVrFJH2vri7SUzzWas/4OGXNOZ+lqWXQel+ATA39LPWbjO81+6huVAsj4xFjqHWbEQ8m3NoRJVlR0Nsg9vKBHjaNhtkGV/AZmT6fWVFyeQy8IvEIpb5MOnCQ6rDzdZxgU0LkgkkAl99dMOBkuEdwMbI3vZzd7CCLDz8qALDccFIwA7kszyJFUzKaEf540mqffbWOJOK5tJ667ewarrjQpW+2nbt7HgVZj8kgU1B/cwwxv6qa2QKi/7yH7HN3nC1a8VJ1844Dsx8FG3Equ1n6U2/OeX3Z4ya/H0DazCXa1/fQHSHpNL+uyjroN9JLW5fAHRGySFVq4CdiAJpyF4B7Pw2I=
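
As a sanity check on the InfluxDB side, querying its HTTP API directly shows whether any measurements are being created at all. A sketch, assuming InfluxDB 1.x and that plaintext credentials are available (the password above is pipeline's encrypted form, not what InfluxDB itself accepts; <plaintext-password> is a placeholder):

# list measurements in the telemetry database; an empty result means nothing has been written
curl -G 'http://influxdb:8086/query' -u client:<plaintext-password> \
     --data-urlencode "db=telemetry" \
     --data-urlencode "q=SHOW MEASUREMENTS"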

And finally, this is what I see in the logs:

time="2019-02-28 21:08:21.244952" level=info msg="Conductor says hello, loading config" config=/etc/pipeline/pipeline.conf debug=false fluentd= logfile=/etc/pipeline/pipeline.dump maxthreads=4 tag
=pipeline version=unspecified
time="2019-02-28 21:08:21.245668" level=info msg="Conductor starting up section" name=conductor section=mykafka stage=xport_output tag=pipeline
time="2019-02-28 21:08:21.245713" level=info msg="Conductor starting up section" name=conductor section=inspector stage=xport_output tag=pipeline
time="2019-02-28 21:08:21.245738" level=info msg="Conductor starting up section" name=conductor section=metrics_influx stage=xport_output tag=pipeline
time="2019-02-28 21:08:21.246676" level=info msg="Metamonitoring: serving pipeline metrics to prometheus" name=default resource=/metrics server=":8989" tag=pipeline
time="2019-02-28 21:08:21.250666" level=info msg="Starting up tap" countonly=false filename=/etc/pipeline/inpesctordump.txt name=inspector streamSpec="&{2 <nil>}" tag=pipeline
time="2019-02-28 21:08:21.265310" level=info msg="setup authentication" authenticator="http://influxdb:8086" name=metrics_influx pem=/etc/pipeline/pipeline_key tag=pipeline username=client
time="2019-02-28 21:08:21.265393" level=info msg="setup metrics collection" basepath="Cisco-NX-OS-device:System/procsys-items/sysload-items" name=metrics_influx tag=pipeline
time="2019-02-28 21:08:21.265911" level=info msg="Conductor starting up section" name=conductor section=grpcdialout stage=xport_input tag=pipeline
time="2019-02-28 21:08:21.267159" level=info msg="Setting up workers" database=telemetry influx="http://influxdb:8086" name=metrics_influx tag=pipeline workers=4 xport_type=influx
time="2019-02-28 21:08:21.267158" level=info msg="gRPC starting block" encap=gpb name=grpcdialout server=":57500" tag=pipeline type="pipeline is SERVER"
time="2019-02-28 21:08:21.267310" level=info msg="gRPC: Start accepting dialout sessions" encap=gpb name=grpcdialout server=":57500" tag=pipeline type="pipeline is SERVER"
time="2019-02-28 21:08:21.317736" level=info msg="kafka producer configured" brokers="[rldv0217.gcsc.att.com:9092]" name=mykafka requiredAcks=0 streamSpec="&{2 <nil>}" tag=pipeline topic=telemetry
time="2019-02-28 21:08:21.549587" level=info msg="gRPC: Receiving dialout stream" encap=gpb name=grpcdialout peer="172.31.1.10:22999" server=":57500" tag=pipeline type="pipeline is SERVER"
time="2019-02-28 21:08:22.554280" level=info msg="gRPC: Receiving dialout stream" encap=gpb name=grpcdialout peer="172.31.1.11:30387" server=":57500" tag=pipeline type="pipeline is SERVER"
time="2019-02-28 21:08:23.669858" level=info msg="gRPC: Receiving dialout stream" encap=gpb name=grpcdialout peer="172.31.1.10:22999" server=":57500" tag=pipeline type="pipeline is SERVER"
time="2019-02-28 21:08:24.675529" level=info msg="gRPC: Receiving dialout stream" encap=gpb name=grpcdialout peer="172.31.1.11:30387" server=":57500" tag=pipeline type="pipeline is SERVER"
time="2019-02-28 21:08:25.792665" level=info msg="gRPC: Receiving dialout stream" encap=gpb name=grpcdialout peer="172.31.1.10:22999" server=":57500" tag=pipeline type="pipeline is SERVER"
time="2019-02-28 21:08:26.796743" level=info msg="gRPC: Receiving dialout stream" encap=gpb name=grpcdialout peer="172.31.1.11:30387" server=":57500" tag=pipeline type="pipeline is SERVER"
time="2019-02-28 21:08:26.915330" level=info msg="gRPC: Receiving dialout stream" encap=gpb name=grpcdialout peer="172.31.1.11:30387" server=":57500" tag=pipeline type="pipeline is SERVER"
time="2019-02-28 21:08:27.913142" level=info msg="gRPC: Receiving dialout stream" encap=gpb name=grpcdialout peer="172.31.1.10:22999" server=":57500" tag=pipeline type="pipeline is SERVER"
remingtonc (Contributor) commented

Hi @anubisg1 - metrics.json is a very frustrating piece of this to get right. Did you ever manage to resolve this issue?

anubisg1 (Author) commented

No, we never did. In the end we abandoned pipeline completely.

remingtonc added the bug, question, and documentation labels and removed the bug label on May 30, 2019