Maintenance mode halts most bucket fixup activity and prevents frequent rolling of hot buckets. It is useful when performing peer upgrades and other maintenance activities on an indexer cluster. Because it halts critical bucket fixup activity, use maintenance mode only when necessary.

Certain conditions can generate errors during hot bucket replication and cause the source peer to roll the bucket. While this behavior is generally beneficial to the health of the indexer cluster, it can result in many small buckets across the cluster if errors occur frequently. Situations that can generate an unacceptable number of small buckets include persistent network problems or repeated offlining of peers. To stop this behavior, you can temporarily put the cluster into maintenance mode. This can be useful for system maintenance work that generates repeated network errors, such as network reconfiguration. Similarly, if you need to upgrade your peers or otherwise temporarily take several peers offline, you can invoke maintenance mode to forestall bucket rolling during that time.

The effect of maintenance mode on cluster operation

To prevent buckets from rolling unnecessarily, maintenance mode halts most bucket fix-up activity. The only bucket fix-up that occurs during maintenance mode is primary fixup: the manager node will attempt, when necessary, to reassign primaries to available searchable bucket copies. In particular, the cluster does not perform fixup that entails replicating buckets or converting buckets from non-searchable to searchable. This means that the manager node does not enforce replication factor or search factor policy during maintenance mode.

Therefore, if the cluster loses a peer node during maintenance mode, it can be operating under a valid but incomplete state. Similarly, if the cluster loses peer nodes in numbers equal to or greater than the replication factor, it also loses its valid state for the duration of maintenance mode. See Indexer cluster states to understand the implications of this. In addition, if the cluster loses even a single peer node while in maintenance mode, it can potentially return incomplete results for searches running during the subsequent period of primary fixup. This period is usually short, often just a few seconds, but even a short period of primary fixup can affect in-progress searches.

A message stating that maintenance mode is running appears on the manager node dashboard. Maintenance mode works the same for single-site and multisite clusters.

Note: The CLI commands splunk apply cluster-bundle and splunk rolling-restart incorporate maintenance mode functionality into their behavior by default, so you do not need to invoke maintenance mode explicitly when you run those commands.

Put the cluster into maintenance mode before starting maintenance activity. Once you have finished with maintenance, disable maintenance mode.

Listing user-created alerts via the REST API

Q: I would like to list all the alerts that are set up by users, not by Splunk apps like ITSI or DMC, using the REST API. I used the queries below, but they did not give proper results.

| rest /services/saved/searches | search title=* | rename title AS "Title", description AS "Description", alert_threshold AS "Threshold", cron_schedule AS "Cron Schedule", search AS "Search", AS "Email", alert_comparator AS "Comparison", dispatch.earliest_time AS "frequency", verity AS "SEV", author AS "Author", disabled AS "Disabled-True" | eval Severity=case(SEV = "5", "Critical-5", SEV = "4", "High-4", SEV = "3", "Warning-3", SEV = "2", "Low-2", SEV = "1", "Info-1") | table Title, Description, Threshold, Comparison, "Cron Schedule", frequency, Severity, Search, Email, Author, Disabled-True

| rest /servicesNS/admin/-/alerts/alert_actions

| rest /servicesNS/-/-/saved/searches | search ack=1 | fields title description search disabled triggered_alert_count actions verity cron_schedule

A: I've had pretty good success with the following search. It returns all alerts that are not part of a default Splunk app and where the alerts are not disabled.

| rest "/servicesNS/-/-/saved/searches" timeout=300 splunk_server=*
| eval length=len(md5(title)), search_title=if(match(title,""),("RMD5".substr(md5(title),(length - 15))),title), user='eai:acl.owner', "eai:acl.owner"=if(match(user,""),rtrim('eai:acl.owner',"="),user), app_name='eai:acl.app', "eai:acl.app"=if(match(app_name,""),rtrim('eai:acl.app',"="),app_name), commands=split(search,"|"), ol_cmd=mvindex(commands,mvfind(commands,"outputlookup")), si_cmd=mvindex(commands,mvfind(commands,"collect"))
| eval si_tgt_index=coalesce(si_tgt_index,'action.summary_index._name'), ol_tgt_filename=coalesce(ol_tgt_filename,'')
| rex field=description mode=sed "s/^\s //g"
| eval description_short=if(isnotnull(trim(description," ")),substr(description,0,127),""), description_short=if((len(description_short) > 126),(description_short."..."),description_short)
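The core idea of the answer's search (keep saved searches that are enabled and do not belong to a default Splunk app, using the eai:acl.app and disabled fields from the REST output) can also be applied outside SPL. Below is a minimal Python sketch of that filtering step, assuming the entries have already been fetched from /servicesNS/-/-/saved/searches; the user_alerts helper and the DEFAULT_APPS set are illustrative assumptions, not an exhaustive list of default apps.

```python
# Sketch: filter saved-search entries (as returned by the Splunk REST
# endpoint /servicesNS/-/-/saved/searches) down to user-created, enabled
# alerts. DEFAULT_APPS is an illustrative assumption, not a complete list.

DEFAULT_APPS = {"search", "splunk_monitoring_console", "itsi"}

def user_alerts(entries):
    """Keep entries that are enabled and not owned by a default Splunk app.

    Each entry is assumed to be a dict shaped like one | rest result row,
    e.g. {"title": ..., "eai:acl.app": ..., "disabled": "0"}.
    """
    return [
        e for e in entries
        if e.get("eai:acl.app") not in DEFAULT_APPS
        and e.get("disabled") in ("0", 0, False)
    ]

# Example with hand-made rows (not real REST output):
rows = [
    {"title": "My CPU alert", "eai:acl.app": "my_app", "disabled": "0"},
    {"title": "DMC alert", "eai:acl.app": "splunk_monitoring_console", "disabled": "0"},
    {"title": "Old alert", "eai:acl.app": "my_app", "disabled": "1"},
]
print([e["title"] for e in user_alerts(rows)])  # -> ['My CPU alert']
```

In a real deployment you would fetch the entries with output_mode=json and authentication, and likely extend DEFAULT_APPS to cover every app shipped with your Splunk installation.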